NAVAL RESEARCH LOGISTICS QUARTERLY
SEPTEMBER 1976
VOL. 23, NO. 3
OFFICE OF NAVAL RESEARCH
NAVSO P-1278
NAVAL RESEARCH LOGISTICS QUARTERLY
EDITORS
Murray A. Geisler
Logistics Management Institute
W. H. Marlow
The George Washington University
Bruce J. McDonald
Office of Naval Research
MANAGING EDITOR
Seymour M. Selig
Office of Naval Research
Arlington, Virginia 22217
ASSOCIATE EDITORS
Marvin Denicoff
Office of Naval Research
Alan J. Hoffman
IBM Corporation
Neal D. Glassman
Office of Naval Research
Jack Laderman
Bronx, New York
Thomas L. Saaty
University of Pennsylvania
Henry Solomon
The George Washington University
The Naval Research Logistics Quarterly is devoted to the dissemination of scientific information in logistics and will publish research and expository papers, including those in certain areas of mathematics, statistics, and economics, relevant to the over-all effort to improve the efficiency and effectiveness of logistics operations.
Information for Contributors is indicated on inside back cover.
The Naval Research Logistics Quarterly is published by the Office of Naval Research in the months of March, June, September, and December and can be purchased from the Superintendent of Documents, U.S. Government Printing Office, Washington, D.C. 20402. Subscription Price: $11.15 a year in the U.S. and Canada, $13.95 elsewhere. Individual issues may be obtained from the Superintendent of Documents.
The views and opinions expressed in this Journal are those of the authors and not necessarily those of the Office of Naval Research.
Issuance of this periodical approved in accordance with Department of the Navy Publications and Printing Regulations, P-35 (Revised 1-74).
A SURVEY OF MAINTENANCE MODELS: THE CONTROL
AND SURVEILLANCE OF DETERIORATING SYSTEMS*
William P. Pierskalla and John A. Voelker
Department of Industrial Engineering and Management Sciences
Northwestern University
Evanston, Illinois
ABSTRACT
The literature on maintenance models is surveyed. The focus is on work appearing since
the 1965 survey, "Maintenance Policies for Stochastically Failing Equipment: A Survey"
by John McCall and the 1965 book, The Mathematical Theory of Reliability, by Richard
Barlow and Frank Proschan. The survey includes models which involve an optimal decision
to procure, inspect, and repair and/or replace a unit subject to deterioration in service.
INTRODUCTION
For nearly two decades, there has been a large and continuing interest in the study of maintenance models for items with stochastic failure. This interest has its roots in many military and industrial applications. Lately, however, new applications have arisen in such areas as health, ecology, and the environment. Although it is not possible to detail these many applications of maintenance models, some of them are: the maintenance of complex electronic and/or mechanical equipment, maintenance of the human body, inspection and control of pollutants in the environment, and maintenance of ecological balance in populations of plants and animals.
Just as the interest in maintainability has grown and changed, so has the sophistication of the models and control policies for solving the maintenance problems. Many of the important early maintenance and/or inspection models possessed an elegance and simplicity which led to easily implementable policies. These models were later generalized; and although much of the elegance and simplicity remains, some of the new results are complex and require the use of large computers for implementation.
In two earlier and excellent works, the area of maintainability was rather thoroughly researched and surveyed up to 1965. These works are: The Mathematical Theory of Reliability, by Richard Barlow and Frank Proschan [1965] and the paper, "Maintenance Policies for Stochastically Failing Equipment: A Survey," by John McCall [1965]. Here, the presentations of these earlier works will not be repeated, but rather the coverage is primarily the results which have appeared since 1965. There are a few exceptions to this rule, however, when it was necessary to review some key models which appeared prior
*This survey was produced in part as a result of research sponsored by the Office of Naval Research under Contract N00014-A-0356-0030, Task NR042-322. Reproduction in whole or in part is permitted for any purpose of the United States Government.
to 1965 and which formed the foundation of many of the later studies. Furthermore, in this survey it is assumed the reader is familiar with such concepts as Markov processes, Poisson processes, dynamic programming, linear programming, Lagrange multipliers, and other concepts and techniques with which the holder of a master's degree or an experienced practitioner in statistics or operations research would be familiar.
Since there are many threads of activity which contribute to the total fabric of maintainability and since it is not possible to include all of them in a reasonably sized survey paper, this study includes only those models which involve an optimal decision to procure, inspect, repair and/or replace a unit subject to deterioration in service. Not included are models which describe the operating characteristics of a system, such as repairmen or machine interference models, unless these models involve some optimization as well. Also not included are models which deal with the controlled reliability of a system such as the design of redundant systems unless they also include aspects of optimal inspection, repair, and/or replacement decisions. Each of these excluded areas is large, important, and worthy of a survey in its own right. Furthermore, each lies slightly outside of what many people would call maintainability. Even with these restrictions, however, the reader will see that the many topics covered in this survey are rich with theoretical significance and practical application.
There are many possible ways to classify the works in maintainability. One could establish a multidimensional grid whose coordinates would be (i) states of the system, such as deterioration level, age, number of spares, number of units in service, number of state variables, etc., (ii) actions available, such as repair, replacement, opportunistic replacement, replacement of spares, continuous monitoring, discrete inspections, destructive inspections, etc., (iii) the time horizon involved, such as finite or infinite and discrete or continuous, (iv) knowledge of the system, such as complete knowledge or partial knowledge involving such things as noisy observation of the states, unknown costs, unknown failure distributions, etc., (v) stochastic or deterministic models, (vi) objectives of the system, such as minimize long run expected average cost-per-unit time, minimize expected total discounted costs, minimize total costs, etc., and (vii) methods of solution, such as linear programming, dynamic programming, generalized Lagrange multipliers, etc.
Then in each cell of the grid, one could conceivably place every paper written on maintainability. The empty cells would mean those research areas were irrelevant or not yet explored. To some extent, McCall [1965] uses such a classification scheme (on a modest scale) by considering the primary categories of "preparedness" and "preventive maintenance" models with and without complete information. Although classification schemes based on some breakdown involving two or more of the seven categories mentioned above are useful for establishing an underlying general theory of maintainability and also for exposing some unexplored areas, such a scheme has not been rigidly followed here. Rather, the papers have been classified in such a manner that a practitioner might be able to find the models relevant to his maintenance problem (to a large extent this approach parallels that taken by Barlow and Proschan [1965]). Thus, there are two major sections with several subsections in each.
The first section surveys discrete time maintenance models. That is, at discrete points in time, a unit (or units) is monitored and a decision is made to repair, replace, and/or restock the unit(s). The second section surveys continuous time maintenance models. Actions and events are not a priori restricted to occur only within a discrete subset of the time axis.
In the discrete time section, most of the models are Markov decision models in which the state of
the system is described by the level of deterioration and/or the number of spare units available in inventory. This section is subdivided by segregating models with no restocking from those which involve an inventory (restocking) decision. The subsection on models with no restocking is further subdivided on the basis of kinds of information available.
The continuous time models are subdivided into several topic categories also. The first of these is concerned with applications of control theory to maintainability. The control function m(t) acquires the interpretation "rate of maintenance expenditure at time t." By the use of control theory, m(t) is selected to maximize discounted return. The second subsection deals with age replacement models, thereby giving new enhancements to the classical model, such as age dependent operating cost and the optimal provision of spares. Also described in this subsection are certain policies that regulate time of replacement according to the occurrence of the nth failure. The third subsection describes shock models, i.e., systems where the unit is subject to external shocks according to some stochastic process, and these shocks affect its failure characteristics. Subsection four deals with interacting repair activities. (Here the decisionmaker must control a system composed of more than one unit.) Specifically discussed are such "system-wide" activities as opportunistic replacement, cannibalization, multistage replacement, and variable repair rate. Also discussed are models involving incomplete information. The decisionmaker's information may be incomplete only in regard to the current state of the system, in which case inspection models arise. On the other hand, he may not know completely the probability law governing the system or the actual cost implication of various actions. In the former case, max-min and Bayesian strategies have been devised. In the latter case, the statistical implications of a related sampling procedure are presented. The concluding section indicates some areas for future research.
Although the authors have tried to give a reasonably complete survey, the reader will note that untranslated Eastern European and Asiatic papers are missing. Any other papers that are not included were either inadvertently overlooked, not directly bearing on the topics of this survey, or no longer in print. The authors apologize to both the readers and the researchers if we have omitted any relevant papers.
1. DISCRETE TIME MAINTENANCE MODELS
In this section, models are reviewed which utilize information regarding the degree of deterioration of the unit or units in order to select the best action at certain discrete points in time. The deterioration may be described by such factors as "wear and tear," "fatigue," etc. In some cases, inspection decisions must be made in order to ascertain the current state of the unit before a repair or replacement action is taken. In others, it may be assumed that the current state is always known at the beginning of a period and the available actions are to replace the unit or to choose one of several repair activities which will tend to decrease the degree of deterioration. Often the decisionmaker must take these actions under conditions of incomplete information about costs, underlying failure laws, or noisy observations of the unit's state. Models involving complete knowledge are treated first, then the models with incomplete information are covered. In these latter models, it is always assumed that there are new replacements available (i.e., an unlimited supply of new spares). The section concludes with models involving the periodic restocking of inventories of spare parts.
Because of the Markovian nature of the state and action transition processes, Markov decision theory and inventory theory are the primary approaches used in the model formulations. Consequently, linear programming and dynamic programming are the primary techniques used to solve these models.
COMPLETE INFORMATION
A unit (or several units) is inspected every period and a decision is made to repair or replace the unit whenever it is found to be in a certain set of states. In the absence of a decision to repair or replace, it is assumed that the unit deteriorates stochastically through a finite set of states denoted by the set of integers {0, 1, . . ., L} according to a Markov chain. The state 0 denotes a new or completely renovated unit and the state L an inoperative or failed unit. After inspecting the unit a decision is made either to repair, replace, or do nothing to the units. In most of the models there are only two decisions every period: decision 1 means do nothing and decision 2 means replace.
Depending upon the assumptions concerning the time horizon, the amount of information available, the nature of the cost functions, the objectives of the models, system constraints, and number of units, different authors have produced many interesting and significant results for variations on the basic model. The basic model was originally introduced by Derman [1962] and extended by Klein [1962]. Although these two important papers are clearly presented in Barlow and Proschan [1965] and Derman [1970], the basic model is briefly given here prior to surveying later generalizations.
The unit is observed at times t = 0, 1, 2, . . . to be in one of the states X_t ∈ {0, 1, . . ., L}. If no action (decision 1) is taken, then p_ij denotes the probability of moving from state i to state j in one period. If the unit is replaced (decision 2), then the unit moves immediately into state 0, and the transition during the period is governed by the probabilities {p_0j}. It is assumed that
(1) p_i0 = 0, i = 0, . . ., L − 1,
(2) p_L0 = 1, and
(3) p_iL^(t) > 0 for some t and each i = 0, . . ., L − 1.
Condition (1) implies the unit is never as good as "new" after its first period of service; condition (2) implies the unit must be replaced on failure; and condition (3) implies the unit will eventually fail and that the underlying Markov chain, {X_t}, has a single ergodic class and the steady state probabilities exist.
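For a concrete transition matrix, conditions (1)-(3) can be checked numerically. The four-state chain below (L = 3) is purely illustrative and not from the paper; condition (3) is verified by confirming that a sufficiently high power of the matrix puts positive mass on the failed state from every starting state.

```python
import numpy as np

# Hypothetical 4-state deterioration chain (L = 3): state 0 is new,
# state 3 is failed.  Rows are "do nothing" transition probabilities.
P = np.array([
    [0.0, 0.7, 0.2, 0.1],   # condition (1): p_i0 = 0 for i < L
    [0.0, 0.6, 0.3, 0.1],
    [0.0, 0.0, 0.5, 0.5],
    [1.0, 0.0, 0.0, 0.0],   # condition (2): p_L0 = 1 (replace on failure)
])
L = P.shape[0] - 1

assert np.allclose(P.sum(axis=1), 1.0)            # rows are distributions
assert all(P[i, 0] == 0.0 for i in range(L))      # condition (1)
assert P[L, 0] == 1.0                             # condition (2)

# Condition (3): from every state, some t-step transition reaches L.
reach = np.linalg.matrix_power(P, 2 * (L + 1))
assert all(reach[i, L] > 0 for i in range(L + 1))
```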
Upon inspecting the unit at any time, it is possible to replace the item before failure. In this way, it may be possible to avoid the consequences of failure or further deterioration of the unit. These decisions to replace the unit or do nothing are summarized by a set of decision rules (or replacement rules) based on the entire history of the process up to time t. Although the most generality is achieved by considering these rules as elements of the class of nonstationary, randomized rules, conditions have been established by Derman [1970] which enable one to restrict attention to stationary nonrandomized rules. These latter rules are of the type that the set {0, 1, . . ., L} is partitioned into two subsets; if X_t falls in the first subset, replace the unit, and if it falls in the second, do nothing. Most of the models surveyed here satisfy the conditions given by Derman or else restrict their attention only to the class of stationary nonrandomized rules which do not utilize all prior history and can be denoted by R_i, for i = 0, . . ., L, where R_i is the action taken when X_t = i.
By making an intervening decision before observing state L, the behavior of the system is modified and the evolution of the system under a replacement rule results in a modified Markov chain. The costs consist of c_1 to replace a unit that has not yet failed, and a higher cost c_2 to replace a failed unit.
The objective is to minimize the expected long run average cost-per-unit time.
By making an additional assumption on the original transition probabilities

(4)  Σ_{j=k}^{L} p_ij is nondecreasing in i = 0, . . ., L − 1 for each fixed k = 0, . . ., L,
Derman has shown that the optimal replacement rule R* is a "control limit" rule; that is, there is a state i* ∈ {0, 1, . . ., L} such that if the observed state k satisfies k ≥ i* then replace the unit and if k < i* do nothing. This key result reduces the size of the set of rules R in the minimization from 2^(L−1) rules to at most L + 1 rules. The same key result holds when the objective is changed to minimize the total long run discounted costs.
Assumption (4) implies that if no replacements are made, the probability of deterioration increases as the initial state increases. Barlow and Proschan [1965] point out that this condition is equivalent to assuming the original Markov chain is IFR.
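Derman's reduction makes the optimal rule directly computable: evaluate the long run average cost of each of the at most L + 1 control limit rules and keep the cheapest. A minimal numerical sketch follows; the four-state matrix and the costs c_1 = 2, c_2 = 5 are invented for illustration, not taken from the paper.

```python
import numpy as np

def avg_cost(P, istar, c1, c2):
    """Long run average cost of the control limit rule 'replace when
    the observed state k satisfies k >= istar'."""
    L = P.shape[0] - 1
    Q = P.copy()
    cost = np.zeros(L + 1)
    for i in range(istar, L + 1):
        Q[i] = P[0]                      # replaced unit restarts as new
        cost[i] = c2 if i == L else c1   # c2 only if it had failed
    # stationary distribution: solve pi Q = pi with sum(pi) = 1
    A = np.vstack([Q.T - np.eye(L + 1), np.ones(L + 1)])
    b = np.zeros(L + 2)
    b[-1] = 1.0
    pi = np.linalg.lstsq(A, b, rcond=None)[0]
    return pi @ cost

P = np.array([[0.0, 0.7, 0.2, 0.1],
              [0.0, 0.6, 0.3, 0.1],
              [0.0, 0.0, 0.5, 0.5],
              [1.0, 0.0, 0.0, 0.0]])
costs = {i: avg_cost(P, i, c1=2.0, c2=5.0) for i in range(1, P.shape[0])}
best = min(costs, key=costs.get)   # Derman: only these candidates matter
```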
The mathematical programming problems resulting from the above objectives have a natural formulation as dynamic programs, and in this context, successive approximations and policy iteration techniques may be used (see Bellman [1957], Bellman and Dreyfus [1962], and Howard [1971]). Derman provided an interesting formulation in the long run average cost case leading to a linear programming problem. On the surface it may appear that the linear programming problem is very difficult to handle for most important problems because of its large size; however, recent work in large-scale linear programming (cf. Lasdon [1974]) gives hope for solving these very large Markov decision problems.
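The successive approximations route can be sketched with relative value iteration for the long run average cost criterion. The transition matrix and the costs c_1 = 2, c_2 = 5 below are invented for illustration; action 0 is "do nothing" and action 1 is "replace," with failure at state L forcing replacement either way.

```python
import numpy as np

# Relative value iteration (successive approximations) for the long run
# average cost replacement problem.  Illustrative numbers only.
P = np.array([[0.0, 0.7, 0.2, 0.1],
              [0.0, 0.6, 0.3, 0.1],
              [0.0, 0.0, 0.5, 0.5],
              [1.0, 0.0, 0.0, 0.0]])
n = P.shape[0]
c1, c2 = 2.0, 5.0
rows = [P.copy(), np.tile(P[0], (n, 1))]   # next-state rows per action
rows[0][n - 1] = P[0]                      # failure forces replacement
cost = np.array([[0.0, c1]] * (n - 1) + [[c2, c2]])

h = np.zeros(n)                # relative values, normalized at state 0
for _ in range(500):
    q = np.stack([cost[:, k] + rows[k] @ h for k in range(2)], axis=1)
    g = q.min(axis=1)[0]       # gain estimate: converges to the optimal
    h = q.min(axis=1) - g      # average cost-per-unit time
policy = q.argmin(axis=1)      # optimal decision in each state
```

With these numbers the iteration recovers a control limit rule, consistent with the IFR structure of the invented matrix.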
Kolesar [1966] considers the same model with the cost function generalized to allow an "occupancy" cost, A_i, associated with being in each state i. With the added condition A_0 ≤ A_1 ≤ . . . ≤ A_L, he demonstrates that a control limit rule is optimal in the long run average cost case.
Now the L + 1 control limit rules can be represented by the integers {i | i = 0, . . ., L}. In his case where a control limit rule (or rules, if uniqueness is not present) is optimal, Kolesar shows that the cost function is integer quasi-convex in i, i.e., if i* is the smallest optimal control limit then the cost of rule R_i is nonincreasing for i < i* and nondecreasing for i ≥ i* (see Greenberg and Pierskalla [1971] for results on quasi-convexity). This observation enables him to develop more efficient algorithms to solve the average cost problem. He further observes that in the linear programming formulation of the model the restriction to control limit rules allows one to modify the simplex algorithm so that at most L changes of feasible bases are needed before the optimal rule is obtained.
The results of Derman and Kolesar were later extended by Ross [1969a] to the case where the state of the system can be represented by an element of some nonempty Borel subset S of the nonnegative real line. Thus, continuous or denumerable state spaces are now allowed. Let x ∈ S denote the state of the system upon inspection at the beginning of a period, and F_x the cumulative distribution function describing the next state given that the decision is made not to replace, defined for x > 0. The failed state is denoted by 0. Thus p_x = F_x(0) is the probability the unit currently in state x will fail before the next inspection.
In parallel with the finite state models, a control limit policy, R_y, replaces the unit at time t if X_t = 0 or X_t > y for some control limit y ∈ (0, ∞] and does nothing otherwise. Ross then proves the following: if g(x), the cost of being in state x, is a nondecreasing bounded function of x for x > 0, p_x is a nondecreasing function of x for x > 0, and for each y > 0 the function (1 − F_x(y))/(1 − p_x) is a nondecreasing function of x for x > 0, then there is a control limit policy R_y that is optimal in the long run discounted case. Under some additional conditions on g(x) and F_x(y), he then shows that a control limit policy is optimal for the long run average cost case as well.
In a recent paper Kao [1973] considers a different type of generalization of the basic Derman model. He considers the problem of a deterioration process which moves from state 0 to state L, but the time spent in each state before a transition is a random variable depending on the transition. This problem is modelled as a discrete time finite state semi-Markov process. Since the underlying process is one of deterioration it is assumed that p_ij = 0 for j ≤ i and that the holding time in state i before going to j is a positive, finite, integer-valued random variable. As in Kolesar, Kao assumes an occupancy cost A_i per unit time, a fixed cost c_i of replacement and a variable cost v_i of replacement per unit time if the decision to replace is made when the unit is in state i. Under reasonable conditions on the cost functions and the expected sojourn time in each state, Kao proves that a control limit policy is optimal over the class of stationary nonrandomized policies for the long run average cost case. He then generalizes his model to include the time the unit has spent in the state as well as the state itself in determining when to replace the item.
Until now, it has been assumed that the decisionmaker is continuously aware of the state of the unit. That assumption will now be dropped. To ascertain the true state of the system, the decisionmaker must inspect it, which is an action entailing a particular cost. Rules for scheduling inspections now enter into the policy.
In an early paper, Klein [1962] was concerned with the different levels of decisions which could be made upon the inspection of a unit. These decisions ranged from replace with a new unit to various types and degrees of repair. In addition, he considered the decision as to when the next inspection should be taken. He showed that the problem could be formulated as a linear fractional program and then reduced to a linear program to be solved.
More specifically, there are costs associated with inspections and repairs. A repair moves the unit from state j to state i at cost r(j, i); replacement is equivalent to moving it from state j to state 0. The cost of inspection while in state j is c(j). After an inspection, the decisionmaker can choose to skip the next m time periods before making another inspection. It is assumed that m is bounded by some number, say M. If the unit is discovered to have failed at some time before inspection, it is assumed that its failure time can be determined during the inspection.
The states are {0, 1, . . ., L, L(1), . . ., L(M)}, where L(m) denotes that the unit failed m time periods ago. The action (j, m) denotes placing the unit in state j by repair or replacement and skipping m periods until the next inspection. The transition probabilities are the same as before. For this model it is assumed p_ij = 0 if i > j, p_{L,L(1)} = p_{L(1),L(2)} = . . . = p_{L(M),L(M)} = 1, and p_iL^(t) > 0 for some t and i = 0, . . ., L − 1.
A linear programming formulation for the long run average cost case can be found in Klein. Derman [1970] shows that a stationary policy is optimal.
Much in the spirit of Derman's and Klein's models, but by using the functional equation of dynamic programming, Eppen [1965] gives sufficient conditions on the cost functions and transition matrices to show that the total discounted expected costs are minimized from period n to the end of the finite horizon by the following policy: there exists a sequence {i_n*} of critical numbers such that in period n if the state of the deteriorating unit exceeds i_n*, then return the system to state i_n* (by repair); otherwise do not repair. This policy parallels the critical number policy in the single product stochastic inventory model without setup cost.
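The backward induction behind such a policy can be illustrated numerically. The sketch below is a toy under stated assumptions (invented IFR transition matrix, nondecreasing holding costs, and a linear repair cost); it computes, period by period, the value function and the best post-repair state, from which critical numbers of the kind Eppen describes can be read off.

```python
import numpy as np

# Finite horizon repair model: at each period, in state i, choose a
# post-repair state j <= i (j == i means no repair).  Numbers invented.
P = np.array([[0.1, 0.6, 0.2, 0.1],
              [0.0, 0.3, 0.4, 0.3],
              [0.0, 0.0, 0.4, 0.6],
              [0.0, 0.0, 0.0, 1.0]])
n = P.shape[0]
hold = np.array([0.0, 1.0, 3.0, 9.0])        # operating cost by state
repair = lambda i, j: 2.0 + 1.5 * (i - j)    # hypothetical repair cost
beta, T = 0.9, 12                            # discount factor, horizon

V = np.zeros(n)                              # terminal values
critical = []                                # best action per period
for _ in range(T):
    newV = np.empty(n)
    act = np.empty(n, dtype=int)
    for i in range(n):
        cand = [(0.0 if j == i else repair(i, j))
                + hold[j] + beta * (P[j] @ V)
                for j in range(i + 1)]
        act[i] = int(np.argmin(cand))
        newV[i] = cand[act[i]]
    V = newV
    critical.append(act.copy())
```

With these monotone costs the value function comes out nondecreasing in the deterioration level, which is the structure the critical number argument rests on.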
In an interesting paper, Hinomoto [1971] considers the sequential control of N homogeneous units by generalizing the approach developed by Klein. At each decision point the decisionmaker determines the level of the repair activity and the time of the next inspection. Hinomoto considers two different plans for the maintenance of the N units. The first plan is a fixed rotating sequence for the units, i.e., find an optimal permutation schedule from the set of all permutations of the numbers 1, . . ., N and then inspect (and repair, if needed) the units sequentially in the order given by the permutation. The second plan is a priority plan which alters the fixed sequencing by queueing those units for repair or replacement which have fallen below a certain critical level of performance. (In the first plan, such a unit would just have to wait its turn in the permutation sequence.) He formulates both problems as linear programs and finds the optimal decisions over the class of fixed cyclic inspection patterns (except for the temporary displacing priorities in the second plan).
Other authors, Derman [1970], Eckles [1968], Sondik [1971], and Smallwood and Sondik [1973], also allow for this greater variety in the decision space by allowing repair or replacement decisions to any of the states {0, 1, . . ., L} (more will be said about their results in the next section).
INCOMPLETE INFORMATION
A problem which often arises in the study of maintenance is the lack of information concerning many aspects of the model. This lack of information may take many different forms. In his earlier survey, McCall [1965] treated the case of lack of knowledge of the underlying failure distribution. In addition to this case, there can be lack of information due to (i) random costs or unknown costs and (ii) noisy observations on the state of the system. A paper treating the costs as random variables with known distribution is given by Kalymon [1972]. The papers concerning unknown costs are generalizations by Kolesar [1967] and Beja [1969] of another basic model due to Derman [1963a]. The papers on noisy observations are by Eckles [1968], Sondik [1971], and Smallwood and Sondik [1973]. Concluding this section, the papers by Satia [1968] and Satia and Lave [1973] on the lack of knowledge of the underlying probability distribution are mentioned.
Kalymon [1972] generalizes the Derman model of the previous section by considering a stochastic replacement cost determined by a Markov chain. This chain is assumed to be conditionally independent of the Markov chain defining the deterioration of the unit from period to period. The cost, C_t, of a new unit in period t is a random variable which takes on a finite set of values. There is a separable salvage value of −[r(c_t) + s(x_t)] and an occupancy cost A(x_t) when the machine is in state x_t, and the realized initial cost of a new machine is c_t in period t.
When the cost functions c + r(c), s(x), and A(x) are nondecreasing in their arguments, the Markov chains {X_t | t = 0, 1, . . .} and {C_t | t = 0, 1, . . .} have distribution functions which are IFR, i.e., they satisfy (4) and its analog, and the conditional expected costs satisfy a certain condition, then the optimal policy is a control limit policy for the nonstationary finite horizon discounted cost function. This result is then generalized to the infinite horizon ergodic chains case for both the long run discounted and the long run average cost cases. In this context, there can be a different control limit policy for each realized replacement cost c.
In another interesting variation on his earlier basic model, Derman [1963a] considered the problem where the unit is in one of several operative states 0, 1, . . ., n and several inoperative states n + 1, . . ., L. A decision k ∈ {1, . . ., K} is made to replace or partially repair the unit at some level once it is observed in states 1, . . ., n, and only to replace if in states n + 1, . . ., L. There is no explicit cost structure available nor can one be easily inferred other than that the cost of failure is much greater than the cost of replacement. The objective is to maximize the expected length of time between replacements, called the cycle time, subject to the constraint that the probability a replacement is made while the process is in state j is no greater than some preassigned number α_j for j = n + 1, . . ., L. Derman develops a linear programming formulation of this model over the class of rules which repeat every time the unit is renewed.
In a subsequent paper, Kolesar [1967] restricts the set of decisions to only two: replace or do not replace. From the class of stationary randomized rules, a generalized control limit rule is defined by

do not replace when i < m,

replace with probability βπ_i(R_m) / [βπ_i(R_m) + (1 − β)π_i(R_{m+1})] when i = m,

replace when i > m,

where 0 ≤ β ≤ 1 and R_m and R_{m+1} are control limit rules (as defined previously). The coefficient β which forms the convex combination of the steady state probabilities for the two nonrandomized control limit rules R_m and R_{m+1} is obtained as the coefficient which satisfies π_j* = βπ_j(R_m) + (1 − β)π_j(R_{m+1}), where the π_j* minimize π_0 subject to Σ_j π_j = 1, Σ_i π_i p_ij = π_j, π_j ≥ 0, and π_L/π_0 = τ, where τ is the predetermined maximum tolerable probability of failure in a cycle. Under the IFR assumption (4), Kolesar establishes that the optimal stationary policy is either do not replace until state L, or else there are control limit rules R_m and R_{m+1} such that a generalized control limit rule is optimal when the maximum tolerable probability of failure in a cycle is τ.
Beja [1969] does not make the IFR type of assumption (4) on the transition probabilities. In so doing, he is able to consider more general transition matrices, e.g., the "bath-tub" variety. Let y_i and x_i denote the probability of failure before replacement and the expected time before replacement, respectively, given that the present state is i. For every state i the set of constraints

y_i/x_i ≤ τ, for a given τ,

imposes restrictions on the potential hazard encountered while in state i. With these additional constraints on the unit, Beja shows that from the class of stationary randomized policies, one of the 2^(L−1) nonrandomized policies (i.e., replace or keep whenever the process is in state 1, 2, . . ., L − 1) is optimal. He then demonstrates what the "implicit" replacement and failure costs are, given the optimal solution which maximizes the cycle length.
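Beja's reduction to nonrandomized policies invites a brute-force sketch: enumerate the keep/replace choices for the intermediate states, compute y_i and x_i from the fundamental matrix of the kept (transient) states, discard policies violating y_i/x_i ≤ τ, and retain the feasible policy with the longest cycle. All numbers, and the convention that the new state 0 is always kept, are assumptions for illustration.

```python
import numpy as np
from itertools import product

P = np.array([[0.0, 0.7, 0.2, 0.1],
              [0.0, 0.6, 0.3, 0.1],
              [0.0, 0.0, 0.5, 0.5],
              [1.0, 0.0, 0.0, 0.0]])
L = P.shape[0] - 1
tau = 0.2                                   # hazard bound y_i / x_i <= tau

def analyse(keep):
    """For the kept (non-replacement) states, return x_i (expected time
    to replacement) and y_i (probability of failing first)."""
    T = sorted(keep)                        # transient states of the cycle
    Q = P[np.ix_(T, T)]
    N = np.linalg.inv(np.eye(len(T)) - Q)   # fundamental matrix
    x = N @ np.ones(len(T))                 # expected absorption time
    y = N @ P[T, L]                         # absorbed at the failed state
    return dict(zip(T, x)), dict(zip(T, y))

best, best_keep = -np.inf, None
for bits in product([0, 1], repeat=L - 1):  # states 1..L-1: keep or replace
    keep = {0} | {i + 1 for i, b in enumerate(bits) if b}
    x, y = analyse(keep)
    if all(y[i] / x[i] <= tau for i in keep) and x[0] > best:
        best, best_keep = x[0], keep        # longest feasible cycle
```

With these numbers, keeping states {0, 1} and replacing at state 2 gives the longest cycle meeting the hazard bound.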
The problems of uncertainty in studying a maintenance problem are treated somewhat differently by Eckles [1968], Sondik [1971], and Smallwood and Sondik [1973]. The underlying process for the unit is still the finite state, discrete time Markov chain as before and there are K possible decisions from which a selection k is to be made. For example, these decisions could be to replace, repair or do nothing. Let p_ij^k(t) be the true conditional probability that the unit goes to state j in the next period given that it is in state i at period t and decision k = k(t) was taken. An inspection of the unit after taking decision k in state i during period t yields the observation x, where q_xi^k(t) is the conditional probability of outcome x given state i and decision k. In this manner, it is possible to characterize maintenance-inspection problems ranging from those whose observations of the state reveal knowledge of the actual state with certainty (as in the previous models) to those which provide no information on the new state of the system. It is assumed (i) that the actual age of the unit in use in period t is always known with certainty given the sample history of observations and decisions H_t, and (ii) there is an underlying cost c_ij^k(t) which represents the cost of going to state j, given state i and decision k in period t. Letting

c_i^k(t) = Σ_j p_ij^k(t) c_ij^k(t)

be the expected one period cost given (i, k, t), and assuming that for any sample history H_t the one-step transition matrix P(H_t) and the current age of the unit t(H_t) are jointly a sufficient statistic for H_t, then there exists an optimal solution which minimizes the expected total discounted costs where the only information needed is the one-step transition matrix P(H_t) and the current age of the unit t(H_t) and not the entire prior history H_t. Bayes' Theorem is then used to update P(H_t) to P(H_{t+1}) (or P(H_{t−1}) if one numbers backward as is often done in finite time dynamic programming). Eckles formulates the problem as a dynamic program and, under the standard assumption that whenever a unit is replaced it is completely renewed (its age is 0) and its transition probabilities are then independent of the past history of the process, he presents an algorithm for finding an optimal nonrandomized age replacement policy for this renewal process.
Sondik [1971] and Smallwood and Sondik [1973] treat the same problem as Eckles; however, they demonstrate that the entire history of the process is contained in the information vector

π(t) = (π_0(t), . . ., π_L(t)),

where

π_j(t) = conditional probability that the actual state at time t is j, given observation x, history H_{t-1} and decision k(t),

so that by Bayes' Theorem

                Σ_i π_i(t) p_ij^k(t) q_xj^k(t)
π_j(t+1) = ---------------------------------------- .
             Σ_j Σ_i π_i(t) p_ij^k(t) q_xj^k(t)
362 W. P. PIERSKALLA AND J. A. VOELKER
Thus, if π(t) is the statistic generated by the outcomes H_t, then π(t+1) is a sufficient statistic for H_{t+1}. This approach differs slightly from the Eckles model in that

q_xj^k(t) = conditional probability that the outcome x is observed given that the true state is j and that decision k was taken just prior to the inspection.
Thus, π(t) behaves as a discrete time, continuous state Markov process. They then demonstrate that the expected total discounted cost function is piecewise linear and convex. Using this structure they develop an efficient algorithm which makes the rather difficult large state space problem relatively easy to solve.
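The Bayes update of the information vector can be sketched directly; the two-state matrices below are invented for illustration and are not from the survey:

```python
def belief_update(pi, P, Q, x):
    """One Bayes step for the Sondik-Smallwood information vector pi:
    pi'(j) is proportional to sum_i pi(i) * P[i][j] * Q[x][j]."""
    n = len(pi)
    unnorm = [sum(pi[i] * P[i][j] for i in range(n)) * Q[x][j]
              for j in range(n)]
    total = sum(unnorm)               # the double-sum denominator normalizes
    return [u / total for u in unnorm]

# Two-state example (invented): state 0 = "good", state 1 = "worn".
P = [[0.9, 0.1],                      # transitions under the chosen decision k
     [0.0, 1.0]]
Q = [[0.8, 0.3],                      # row x: prob. of outcome x in new state j
     [0.2, 0.7]]
pi = belief_update([1.0, 0.0], P, Q, x=1)   # a "looks worn" observation
```

Each observation re-weights the propagated belief by its likelihood and renormalizes, which is why π(t) carries all the information in the history.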
To conclude this subsection, the case of incomplete knowledge of the probability law governing the system's evolution is briefly considered. As was mentioned earlier, McCall [1965] surveyed this case extensively.
The optimization of a maintenance problem modelled as a Markov chain with unknown transition probabilities is usually approached either from a game theoretic (i.e., max-min or max-max) or from a Bayesian point of view. The max-min approach essentially seeks the maximum over all possible policies of the minimum of the total expected long run discounted return with respect to all possible transition matrices (within some set). The max-max approach is defined analogously. Satia [1968] proves there exists a pure, stationary policy for this latter decision process that is optimal.
Satia and Lave [1973] discuss this earlier Satia result and present an algorithm similar to Howard's Policy Improvement Algorithm. Their algorithm will converge to within any predetermined ε-interval about the true max-min solution in a finite number of iterations. Satia and Lave also formulate a Bayesian approach to the problem and present an implicit enumeration algorithm for its solution.
Maintenance and Inventory Models
Most maintainability models which provide for the replacement of a unit assume that the replacement items are drawn from an infinite stock. However, for some models, this stock is not infinite; indeed, its management becomes a control variable.
There are many inventory papers which treat stock replenishment problems for stochastically failing equipment: Falkner [1969], Prawda and Wright [1972], Sherbrooke [1968, 1971], Sobel [1967], Porteus and Lansdowne [1974], Silver [1972], Miller [1973], Moore, et al. [1970], Demmy [1974], and Drinkwater and Hastings [1967]. These papers do not consider problems where a decision must be made to repair, replace or inspect a unit or units. Rather they assume that once a unit has failed, it must be repaired or replaced, and the decision process is how much inventory to stock either initially, periodically, or continuously.
To be more specific, Prawda and Wright [1972] examine a system where there may be many identical units in operation. These units fail in one of two ways: in period i = 1, 2, . . ., a unit fails and is either repairable, with probability distribution F_i(·), or nonrepairable, with a second probability distribution. Repairable units go to a repair facility and after a fixed number of periods are renewed and returned to inventory. Nonrepairable units are discarded. If the available inventory is insufficient to meet replacement needs, these needs are backlogged until inventory is available. An order for new units is placed at the beginning of each period and delivery is received a fixed number of periods later. They consider the two problems of how much to order every period to minimize the expected total discounted costs and to minimize the expected long run average cost-per-unit time. In their model, there are four costs: ordering, holding, shortage, and salvage. Following some earlier work in inventory theory by Veinott [1965], Prawda and Wright show that for an ordering cost of c per unit and explicit quasi-convexity of the single period cost function, the optimal policy is a stationary single critical number in each period. That is, in period i, before ordering, if the state of the system determined by the stock on hand, in repair, and on backorder is denoted by x_i, then there is a number y such that

if x_i < y, order y − x_i;

otherwise do not order. They also consider the case where there is a setup cost K every time an order is placed and give results on the optimal order quantities.
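The form of this policy is easy to state in code; the sketch below illustrates only the ordering rule, since computing the critical number y itself requires the single period cost model:

```python
def order_quantity(x, y):
    """Prawda-Wright single critical number rule: if the stock position x
    (on hand + in repair + on order - backorders) is below the critical
    number y, order up to y; otherwise order nothing."""
    return max(y - x, 0)
```

With a positive setup cost K the optimal quantities are no longer of this pure order-up-to form, which is why that case is treated separately.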
Taking a different approach, Sherbrooke [1968, 1971] considers the problem of determining the stock levels at each echelon of a multiechelon multiunit inventory system of repairable units. The idea is that at various sites j = 1, . . ., J there are repairable units of types i = 1, . . ., I in use and in inventory, y_ij, and there is a central facility, 0, which maintains a buffer inventory, y_i0, for use at the sites when needed. Each site including the central facility has repair capabilities. The problem is to determine the y_ij for i = 1, . . ., I and j = 0, 1, . . ., J which minimize the total weighted expected number of units backlogged at any point in time subject to a budget constraint on repair and operating costs. The failures of units are independent and identically distributed (by unit type) according to a logarithmic Poisson distribution with constant variance to mean ratios. There is no transshipment among sites 1, . . ., J, and the repair decisions are not explicitly entered in the model (only implicitly through the logarithmic Poisson failure rates). It is also assumed that there are an infinite number of repair activities, so that the repair times are independent of the number of units being repaired. Sherbrooke shows that the generalized Lagrange multiplier approach of Everett [1963] and Greenberg and Robbins [1972] can be used to obtain near optimal solutions to the problem.
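A marginal-analysis sketch of the generalized Lagrange multiplier idea is given below for a single echelon: spares are added one at a time wherever the expected-backorder reduction per dollar is greatest. This is a simplification (Sherbrooke's failure process is logarithmic Poisson and his model is multiechelon), and all numbers are illustrative:

```python
import math

def ebo(lam, spares, kmax=100):
    """Expected backorders with `spares` on hand and Poisson pipeline
    demand of mean lam: sum over k > spares of (k - spares) * P(D = k)."""
    p, total = math.exp(-lam), 0.0
    for k in range(kmax):
        if k > spares:
            total += (k - spares) * p
        p *= lam / (k + 1)          # Poisson pmf recursion
    return total

def allocate(lams, costs, budget):
    """Greedy marginal allocation in the spirit of Everett's generalized
    Lagrange multipliers: repeatedly buy the spare giving the largest
    reduction in expected backorders per dollar, until the budget is spent."""
    s, spent = [0] * len(lams), 0.0
    while True:
        best, best_ratio = None, 0.0
        for i, (lam, c) in enumerate(zip(lams, costs)):
            if spent + c > budget:
                continue
            ratio = (ebo(lam, s[i]) - ebo(lam, s[i] + 1)) / c
            if ratio > best_ratio:
                best, best_ratio = i, ratio
        if best is None:
            return s
        s[best] += 1
        spent += costs[best]

spares = allocate(lams=[2.0, 0.5], costs=[1.0, 1.0], budget=4.0)
```

The item with the heavier pipeline demand naturally attracts more of the budget.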
Porteus and Lansdowne [1974] consider the same model, but assume the failure process is Poisson, that y_i0 = 0 for all i = 1, . . ., I, and that the mean repair times for the repair of unit type i at site j undergoing repair work of level l can be controlled. Using a generalized Lagrange multiplier algorithm, they obtain the spare stock quantities and mean repair times which minimize either the long run average cost-per-unit time or the long run total discounted cost.
In a related paper also based on Sherbrooke's work, Silver [1972] determines the inventory levels y_i for repairable subassemblies (i = 1, . . ., I) of a major assembly. The failure process is Poisson for each subassembly, the failure of a single subassembly makes the entire unit inoperative, and cannibalization of major assemblies awaiting repair is assumed. Under the additional assumption that there is no inventory of the major assembly (i.e., y_0 = 0), Silver shows that the entire optimization problem is separable and easily solved. In the case y_0 > 0, he gives a near-optimal algorithm to obtain y_i for i = 0, 1, . . ., I.
Miller [1971] considers the optimal stocking problem also; however, in [1973] he asks the question: to which site should a repaired item be sent after it is repaired at a central facility, given that transportation times differ and that only the central facility can repair items? He formulates a single-item multilocation model and shows by simulation that the policy of shipping the repaired item to the site which realizes the greatest marginal decrease in the expected backorders from this one additional unit (computed over the time required to transport the unit) is better than the current Air Force shipping policy. Furthermore, under the assumption that repairs are instantaneous after a failure, he proves that this policy is optimal.
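For Poisson demand the marginal decrease in expected backorders from one more unit at site j reduces to a tail probability, since EBO(s) − EBO(s+1) = P(D > s). A rule in Miller's spirit can therefore be sketched as follows (the data are invented; the survey does not give his exact formulation):

```python
import math

def poisson_sf(lam, s):
    """Tail probability P(D > s) for D ~ Poisson(lam)."""
    p, cdf = math.exp(-lam), 0.0
    for k in range(s + 1):
        cdf += p
        p *= lam / (k + 1)          # Poisson pmf recursion
    return 1.0 - cdf

def ship_to(stock, lead_demand):
    """Send the repaired unit to the site with the greatest marginal decrease
    in expected backorders, i.e. the largest probability that demand over
    the transport/resupply time exceeds the stock currently at the site."""
    return max(range(len(stock)),
               key=lambda j: poisson_sf(lead_demand[j], stock[j]))

site = ship_to(stock=[2, 0, 5], lead_demand=[1.0, 1.5, 2.0])
```

Here the stocked-out site with moderate demand wins over a well-stocked site with higher demand, which is the essence of the marginal argument.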
Along different lines, Derman and Lieberman [1967] consider a joint replacement and stocking problem. Inspections are made every time period. An initial stock of N identical units is on hand. At the end of each period, a decision is made to replace the currently working unit or not to replace it. If a replacement is made, the new unit works at a level s with probability f_s and continues to work at the same level until it either fails or is replaced. Its life-length is a random variable with a geometric distribution. If at the end of the period the unit in service has failed, it is replaced provided there are units still in stock. If there are none in stock, the system is down for one period of time while N units are reordered.

The state space is described by {(n, s) : n = 1, . . ., N; s = 1, 2, . . .} ∪ {0}, where n denotes the number of (identical) units on hand including the one in service, and s denotes the performance level of the unit in service. Note that the levels of service are a denumerable set. The element 0 denotes no units in stock and the system down. The available actions are {1, 2}, where 1 denotes no replacement and 2 denotes replacement (or reorder).
Using the resulting transition probabilities and the cost functions given by

g_2(0) = C,
g_1(n, s) ≤ g_2(n, s) for n = 1, . . ., N; s = 1, 2, . . .;

where g_1(n, s) is nondecreasing in s for each fixed n, Derman and Lieberman show that for a fixed N there exists a sequence of numbers s_1, s_2, . . ., s_N such that the optimal solution for minimizing the expected average cost-per-unit time is a stationary policy of control-limit form: with n units on hand, replace the unit in service when its performance level s exceeds s_n. Under additional assumptions on the cost functions, in order to determine the optimal N, only a finite number of possible choices for N need to be investigated.
This joint maintenance-inventory model is generalized by Ross [1969a] to allow a continuous state space and deterioration of the component from period to period. He establishes optimal policies of the same form as Derman and Lieberman and shows that when the inventory is 0, it is optimal to order N* units.
2. CONTINUOUS TIME MAINTENANCE MODELS
In this section maintenance models are considered which do not contain the assumption that maintenance or inspection activity is a priori restricted to a particular discrete set of points in time. That is to say, in these models the actions of the decisionmaker may potentially take place anywhere on the continuous time axis (although there may be only a discrete number of such actions).
In certain of these models, the maintenance activity is permitted to occur as a continuous stream. That is, the decisionmaker must optimize over functions m(·) where m(t) is the maintenance expenditure rate at time t. These models are considered first.
Control Theory Models
Determination of the optimal maintenance schedule and the sale date for a unit in a deterministic environment has been under study by a number of different authors. Early solutions to the problem are to be found in Masse [1962]. Naslund [1966] was the first to solve the problem by making use of the maximum principle.
In more recent work, Thompson [1968] presents a simple maintenance model which illustrates the application of the maximum principle technique to maintenance problems. The model contains the following factors: the (unknown) sale date of the unit T; the present value V(T) of the unit if its sale date is T; the salvage value S(t) of the unit at time t; the net operating receipts Q(t) at time t; the rate of interest r; the number of dollars spent on maintenance m(t) at time t, where maintenance refers to money spent over and above necessary repairs; the maintenance effectiveness function f(t) at time t (in units of dollars added to S(t) per dollar spent on maintenance); the obsolescence function d(t) at time t (dollars subtracted from S(t)); and the production rate p at time t.
It is assumed that d, f, and m are piecewise continuous, d is nondecreasing and f is nonincreasing, r is constant over time, and, for some given constant M, 0 ≤ m(t) ≤ M.
The object of this model is to choose a maintenance policy m(t) and the sale date T* to maximize V(T). Thompson begins by solving for the optimal maintenance policy for a fixed sale date T.
He shows that the Hamiltonian is linear in the control m; hence the optimal maintenance policy will be of the bang-bang type (piecewise constant). The optimal maintenance policy is obtained by solving

f(t) = r / (p − (p − r) e^{−r(T−t)})

for the unique point t′. Then

         { M,         t < t′,
m(t) =   { arbitrary, t = t′,
         { 0,         t > t′.
It is clear that m(t), the optimal control, is a function of T. In this respect, it is piecewise constant; hence, the solution of dV/dT = 0 for T is simplified. Thompson presents the details of this procedure, illustrates the model with a few examples, and extends the model to the case of a variable production rate.
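A numerical sketch of the bang-bang solution: reading Thompson's switching condition as f(t) = r/(p − (p − r)e^{−r(T−t)}), the switch point t′ can be located by bisection. The effectiveness function f and all parameter values below are hypothetical, chosen only so that t′ falls in the interior of [0, T]:

```python
import math

r, p, T = 0.10, 1.0, 10.0                     # illustrative parameters only
f = lambda t: 1.5 * math.exp(-0.3 * t)        # hypothetical nonincreasing f(t)

def switch_point(f, r, p, T, tol=1e-8):
    """Bisection for the t' solving f(t) = r / (p - (p - r)*exp(-r*(T - t))).
    For p > r the right-hand side increases to 1 at t = T while f is
    nonincreasing, so there is at most one interior crossing."""
    rhs = lambda t: r / (p - (p - r) * math.exp(-r * (T - t)))
    g = lambda t: f(t) - rhs(t)
    if g(0.0) <= 0.0:                         # maintenance never pays
        return 0.0
    if g(T) >= 0.0:                           # maintain at rate M throughout
        return T
    lo, hi = 0.0, T
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) > 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

t_switch = switch_point(f, r, p, T)
m = lambda t, M=1.0: M if t < t_switch else 0.0   # the bang-bang policy
```

Before t′ every maintenance dollar adds more than a dollar of discounted salvage value, so the control sits at its upper bound M; afterwards it drops to zero.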
Arora and Lele [1970] extend Thompson's model by considering the effect of technological progress. This was accomplished by including a term for obsolescence, due to such progress, in the state equation for the salvage value of the unit.
Kamien and Schwartz [1971] consider a maintenance model which represents another extension of Thompson's work. In their model, it is assumed that the value of the unit's output is independent of its age while the probability of failure increases with its age. Furthermore, it is assumed that revenue and scrap value are both independent of age. The probability of failure is influenced by the amount of money spent on maintenance according to the following differential equation:

F′(t) = (1 − F(t)) h(t) (1 − u(t)).

The function u(t) ∈ [0, 1] is the level of maintenance at time t; F(t) is the time-to-failure distribution provided the unit is given maintenance according to the schedule u(·); F̄(t) is the corresponding distribution provided the unit receives no maintenance; and h(t) = F̄′(t)/(1 − F̄(t)) is the "natural" failure rate of the unit. The object of maintenance is to reduce the probability of failure. Expected revenue from the unit's output is maximized by selecting the appropriate control function, and an optimal sale date T* is determined.
Kamien and Schwartz characterize the solution to this problem by deriving necessary conditions for the optimal sale date. In addition, they use the maximum principle to characterize necessary conditions for the optimal control and in particular show that it is not of the bang-bang type. They also prove sufficiency of their necessary conditions. These results are of interest since the underlying process in this case is controlled by influencing the failure rate and not the salvage value, as was the case in Thompson's paper.
In two recent papers by Sethi and Morton [1972] and Sethi [1973], the basic one-unit model is extended to the situation of maintaining a chain of units. In addition, conditions for a changing technological environment affecting both production and maintenance requirements of future units are assumed. In this dynamic model, prices of future units are also allowed to vary. In the first paper, a finite horizon problem is considered and a solution procedure for determination of the optimal maintenance schedule for each unit in the chain is derived. In addition, conditions for bang-bang control are discussed. In the second paper, the problem is solved where the maintenance policy is assumed to be stationary in the sense that the same maintenance policy is applied to each unit in the chain. The optimal maintenance schedule is characterized and again conditions for bang-bang control are presented. Finally, the computation of the optimal replacement period is posed in terms of a nonlinear programming problem.
In a related paper by Tapiero [1973], a sequence of n units is also considered. This paper is a direct generalization of Thompson's model. Characterization of optimal maintenance schedules and a discussion of replacement times are presented. Tapiero demonstrates that the decision to replace a unit depends only on the relative value of the current unit and the subsequent unit. Thus a replacement is effected when the subsequent unit becomes more profitable to operate. Tapiero refers to this condition as "technical obsolescence."
The above maintenance policies permitted a (piecewise) continuous rate of maintenance expenditure through time. Consider now policies that apply a maintenance action only at discrete instants of time; for example, replacing an item upon its reaching a certain age or upon its third breakdown. Such policies can nonetheless be distinguished from those in Part 1 because the event precipitating the maintenance action is, in general, permitted to occur anywhere on a continuous time axis.
Age Dependent Replacement Models
In the earlier models of age replacement (cf., Barlow and Proschan, 1965) the replacement of a unit at failure costs c_2 while replacement before failure costs c_1 < c_2. It was shown that if F, the distribution of time-to-failure, had a strictly increasing failure rate, then there existed a unique T* such that expected cost-per-unit time was minimized if the unit was replaced at age T* or at failure, whichever occurred first.
Glasser [1967] has obtained solutions to the age replacement problem for three specific distributions: the truncated normal, the gamma, and the Weibull.
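For illustration, the classical long run cost rate (cf. Barlow and Proschan) can be minimized numerically for one of Glasser's three distributions, the Weibull; the cost and shape parameters below are invented:

```python
import math

c1, c2 = 1.0, 5.0                  # planned vs. failure replacement cost, c1 < c2
beta, eta = 2.0, 1.0               # Weibull shape/scale; strictly IFR for beta > 1

F = lambda t: 1.0 - math.exp(-((t / eta) ** beta))   # time-to-failure c.d.f.

def cost_rate(T, n=2000):
    """Long run expected cost per unit time of 'replace at age T or at
    failure, whichever comes first': (c2*F(T) + c1*(1-F(T))) divided by
    the expected cycle length, the integral of 1 - F over [0, T]
    (approximated here by the midpoint rule)."""
    h = T / n
    mean_uptime = h * sum(1.0 - F((i + 0.5) * h) for i in range(n))
    return (c2 * F(T) + c1 * (1.0 - F(T))) / mean_uptime

# Crude grid search for the unique minimizer T*.
T_star = min((0.05 * k for k in range(1, 200)), key=cost_rate)
```

With a strictly increasing failure rate the cost rate is unimodal in T, so even this crude grid search lands near the unique T*.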
Fox [1966b] showed the optimality of an age replacement policy under a total discounted cost criterion. For a continuous and strictly increasing hazard rate, he derives an integral equation which can be solved for the optimal T*.
Schaefer [1971] extends the standard age replacement model by including an age-dependent cost. Such a cost may reflect the increasing burden of routine maintenance as the unit ages, its diminishing productivity, or reduced salvage value for the unit at replacement due to depreciation. Specifically, he expresses the total cost up to time t as

C(t) = c_1 N_1(t) + c_2 N_2(t) + c_3 [ Σ_{i=1}^{N(t)} Z_i^a + (t − S_{N(t)})^a ],

where N_1(t) is the number of replacements due to reaching the age replacement level T which have occurred by time t, N_2(t) is the number of replacements due to failure by time t, N(t) = N_1(t) + N_2(t), c_3 > 0, Z_i is min{X_i, T} where X_i would be the uninhibited life of the ith unit in the sequence of replacements and T is the fixed age at which replacement is to be made, S_k is the time of the kth replacement, and 0 < a < 1. The goal is to minimize the long run average cost-per-unit time. By an argument similar to the one given in Chapter IV of Barlow and Proschan [1965], it can be shown that the optimum policy is nonrandom if the failure distribution F is continuous. The case of an exponential life distribution is analyzed in more detail.
R. Cleroux and M. Hanscom [1974] considered a very similar model. One of the differences is that the age-dependent cost c_3(ik) is incurred only at discrete times, the multiples of a positive constant k. Moreover, c_3(ik) need not be increasing in i. For F continuous and IFR, they show that the optimal age replacement policy is nonrandom for an infinite time span. They then develop sufficient conditions for the optimal replacement interval T* to be a finite number and to be restricted to a certain finite set.
The notion of an age-dependent cost structure is further generalized by M. R. Wolfe and R. Subramanian [1974]. If the nth unit is replaced at age T, the total cost incurred over the life of that unit is

W_n = ∫_0^T [Y_n + r(s)] ds + K_n,

where Y_n and K_n are independent random variables forming renewal processes and r(·) is a differentiable and strictly increasing function. K_n is the cost of replacement and Y_n + r(s) is the cost rate at time s after the installation of the nth unit. There are no failures. In order to minimize the expected total cost-per-unit time, the decisionmaker determines a critical threshold value c* such that when the cost rate exceeds c* he replaces the unit. For a particular realization of Y_n, this occurs at age r^{-1}(c* − Y_n). Procedures for determining c* are given, and for r(t) linear, they derive an explicit solution.
An example where an optimum age replacement policy is found for a two-unit redundant system is provided by T. Nakagawa and S. Osaki [1974]. While one of the identical units is in operation, the other is in standby status, immune to all failure or aging effects. When the operating unit is sent to repair, either for preventive maintenance (which renews the unit) or because of failure, the standby unit takes over. If an operating unit should fail when the other unit is still at the repair facility, the system becomes inoperative and the most recent failure must wait its turn for repair. Under the assumption that a preventive maintenance activity for a unit entails less mean time in repair than does the repair of a unit's failure, it may be advisable to routinely schedule such preventive maintenance after a unit has been in operation for time T. (If at that point in time the operating unit lacks a standby, preventive maintenance is delayed until the current repair work is completed.) Nakagawa and Osaki derive the optimal value for this T under the assumptions of increasing failure rate and with regard to maximizing the long run proportion of time the system is operating. Their arguments utilize the regenerative property of the system at those epochs when the operating unit enters the repair facility and a standby unit is available to take over.
With very expensive and complicated systems, the failure of a single component unit would not very reasonably call for replacing the entire system. Instead the system could be restored to operation by replacing the single failed unit. Because the great bulk of that system's components were not renewed, the probability distribution for the system's remaining life remains essentially what it was at the instant before failure. That is, the failure and subsequent repair activity often do not affect the system's failure rate. The action of restoring a failed system to operation without affecting its failure rate is called minimal repair. Barlow and Hunter [1960] incorporated this notion in their Policy Type II replacement model (Policy Type I was the simple age replacement model: replace at age T or failure, whichever comes first). A Type II Policy assumes that the unit is replaced after functioning for T units (downtime is not included in the T). Any failures before that time would be dealt with by minimal repair. The optimization of a Type II policy with instantaneous minimal repair is equivalent to the age replacement model which incorporated an age-dependent cost of operating the unit. This age-dependent cost is the expected cost of incurring the expense of minimal repair, which depends on the unit's age via the failure rate. Bellman [1955] and Descamp [1965] applied dynamic programming to this problem. Sivazlian [1973] generalized their work by permitting a positive downtime for the minimal repair following a failure. This downtime is random with an arbitrary distribution. Using the functional equation technique, he derives an explicit expression for the long term total expected cost. Further, he derives necessary and sufficient conditions for the optimal policy to be of the "Type II" form described above.
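To make the Type II trade-off concrete: with instantaneous minimal repair the failures form a nonhomogeneous Poisson process, so the expected number of minimal repairs in [0, T] equals the cumulative hazard H(T), and the long run cost rate is (c_m H(T) + c_r)/T. For a Weibull hazard this minimizes in closed form; the cost symbols and parameter values below are our own illustration, not Barlow and Hunter's:

```python
c_m, c_r = 1.0, 10.0          # minimal-repair and replacement costs (our notation)
beta, eta = 2.5, 1.0          # Weibull hazard h(t) = (beta/eta)*(t/eta)**(beta-1)

H = lambda T: (T / eta) ** beta            # cumulative hazard = expected number
                                           # of minimal repairs in [0, T]
rate = lambda T: (c_m * H(T) + c_r) / T    # long run expected cost per unit time

# Setting d(rate)/dT = 0 gives the closed form for strictly IFR Weibull, beta > 1:
T_star = eta * (c_r / (c_m * (beta - 1))) ** (1.0 / beta)
```

A larger shape parameter beta (faster wear-out) or a cheaper replacement pulls T* down, exactly the qualitative behavior the Type II policy is meant to capture.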
Makabe and Morimura [1963a, 1963b, 1965] and Morimura [1970] introduce a Policy of Type III and Type III′. Under Policy Type III, the unit is replaced at the kth failure. The k − 1 previous failures are corrected with minimal repair. A Policy Type III, Morimura feels, would be easier to implement in many practical situations than a Type II policy.
An optimal policy of Type III is shown to exist for strictly IFR distributions, both with respect to the criterion of expected fraction of time operating in [0, ∞), called limiting efficiency, and with respect to what Makabe and Morimura [1963a, 1963b] call the maintenance cost rate, which is defined to be

[cost-per-unit downtime] × [expected fraction of downtime]
+ [expected cost of all repairs and replacements during a unit time].
Also for distributions of strictly increasing failure rate, Morimura [1970] has shown the existence of an optimal policy (with respect to limiting efficiency) for a larger class of policies, Type III′ policies. This class of policies is specified by two critical numbers t* and k. All failures before the kth failure are corrected only with minimal repair. If the kth failure occurs before an accumulated operating time of t*, it is corrected by minimal repair and the next failure induces replacement. But if the kth failure occurs after t*, it induces replacement of the unit. Clearly, if t* = 0, this reduces to a policy of Type III.
Even if the replacement rule is purely according to age, it cannot always be assumed that the provision of replacement units is outside the purview of the decisionmaker. Falkner [1968] examines a maintenance-inventory problem where there is a single unit which is operating and it fails according to an IFR distribution F(·). When the unit reaches a certain age or when it fails, it is replaced by another new unit. The problem is to find both the initial number of spares, N, to produce and the age replacement policies T_j(t), j = 1, . . ., N + 1, for the original new unit and the spares in order to minimize the expected total operating cost of the system over a finite time interval [0, T]. This problem is formulated as a dynamic program with nondecreasing costs for holding stock (h per unit), stockout penalty (p per unit) and unscheduled replacement (r per unit). Under the assumption F(0) = 0, Falkner shows that the optimal number of spares is bounded above by the greatest integer less than or equal to p/h, where h > 0. With additional assumptions on F(·) he is able to characterize the cost structure and the structure of the optimal age replacement policies.
In a later paper, Falkner [1969] specializes this model by removing the age replacement decision process and is able to obtain stronger results on the optimal initial number of spares. He gives an application using the negative exponential failure distribution.
Shock Models
In most maintenance models the time-to-failure random variable of a unit is considered intrinsic to that unit. But it is possible to take a different view, regarding the unit as being subject to exterior shocks, each of which damages the unit (or causes wear) in such a way that the damage accumulated up to a particular time defines the unit's probability of failure at that time.
For example, A-Hameed and Proschan [1973] set up the shock process as a nonhomogeneous Poisson process. This process, combined with the probabilities P_k that the unit will survive k shocks, induces a time-to-failure distribution for the unit. Various properties of the distribution {P_k} are related to corresponding properties in the induced time-to-failure distribution. For example, if the former distribution is IFR, so is the latter.
The question of optimal replacement rules in the context of a shock model has received attention only very recently. Taylor [1973] considers a unit subject to a shock process where the decisionmaker knows at all times the level of accumulated damage from the shocks. At the occurrence of a shock he has the option of replacing the item at cost c_1. If the item should fail, it must be replaced at cost c_2 > c_1. The shocks occur according to a Poisson process and each shock causes a random amount of damage which accumulates additively. The device may fail only at the occurrence of a shock and then with a probability which depends on the accumulated damage. If the probability of the device failing is an increasing function of the accumulated damage, Taylor proves that the optimal replacement rule is of the following control limit form: replace the device at failure or when accumulated damage first exceeds a critical control level Z*. He gives an equation which implicitly defines Z* in terms of the replacement costs and other system parameters.
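A small simulation sketch of this control-limit rule follows; the shock rate, damage distribution, failure probability function, and costs are all invented for illustration:

```python
import math
import random

def one_cycle(Z_star, lam_shock=1.0, mean_damage=1.0, c1=1.0, c2=5.0,
              fail_prob=lambda z: 1.0 - math.exp(-0.1 * z)):
    """Simulate one replacement cycle of a Taylor-style shock model under the
    control-limit rule: shocks arrive in a Poisson stream, each inflicting
    additive exponential damage; the unit may fail only at a shock, with a
    probability increasing in accumulated damage (cost c2), and is replaced
    preventively at cost c1 < c2 once damage first exceeds Z_star."""
    t, z = 0.0, 0.0
    while True:
        t += random.expovariate(lam_shock)           # time to the next shock
        z += random.expovariate(1.0 / mean_damage)   # damage accumulates
        if random.random() < fail_prob(z):
            return t, c2                             # failure replacement
        if z > Z_star:
            return t, c1                             # preventive replacement

random.seed(1)
cycle_len, cycle_cost = one_cycle(Z_star=5.0)
```

Averaging cycle cost over cycle length across many simulated cycles estimates the long run cost rate, and sweeping Z_star traces the trade-off that Taylor's implicit equation resolves analytically.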
Richard Feldman [1974, 1975] has permitted a more general stochastic process to represent the incidence of damage to the unit. In particular, both the time to the next shock and the degree of damage inflicted by that shock may depend on the current level of accumulated damage. He then derives the optimal control limit rule for replacement.
Interacting Repair Activities Models
For a system composed of many units, the repair or replacement of one unit should sometimes be considered in conjunction with what happens to the other units. Below are discussed four ways in which a maintenance policy either gives rise to, or exploits, interactions among the units of a system; namely, opportunistic policies, cannibalization policies, multistage replacement policies, and variable repair rate policies.
Opportunistic policies exploit economies of scale in the repair or replacement activity. That is, two or more repair activities done concurrently may cost less than if they are done separately. So the necessity of performing at least one repair might provide the economic justification to do several others at the same time.
In cannibalization and multistage replacement models, units of the same type are utilized at different locations in the system. In response to a failure at one location, an identical unit may be transferred there from another location. In the multistage model, a new item must enter the system at some location so that all units are restored to operation. Also in multistage models, the purpose of transferring units among locations is to locate those units which, because of their age, are less likely to fail at those locations where failure is most costly. In the cannibalization model, on the other hand, no new item enters the system when a transfer is made. Since the performance of the system depends on which items are functioning, the purpose of the transfer is to provide the system with the best possible configuration of functioning units.
The last type of interacting activities model to be discussed is a variable repair rate model; that is, when the repair capacity of a system is limited and under the decisionmaker's control, he may wish to modify that capacity according to the number of items in a down state.
Some recent work has examined opportunistic repair in the context of the following system structure: There are two classes of components, 1 and 2. Class 1 contains M standby redundant components, so that upon the failure of the currently operating class-1 component, a standby takes over. When all the class-1 standbys have failed, the system suffers catastrophic failure. The class-2 components, on the other hand, form a series system; if one of them should fail, the system suffers a minor breakdown. The operator always knows the state of the system.
When a minor breakdown occurs, there is the opportunity for opportunistic repair of those class-1 items which have failed. Kulshrestha [1968a] examined such a policy under the assumption that the class-1 units fail according to a general distribution and the ith component of class-2 fails with the constant rate λ_i (i = 1, 2, . . ., N). The time to complete repairs follows a general distribution. He then compares this policy to the corresponding nonopportunistic policy upon assuming the failure distribution of the class-1 units to be exponential.
Nakamichi, et al. [1974] examine the same system, but they require that any class-1 unit enter repair upon its own failure, not waiting for either a minor or major system failure. They further assume that in a major breakdown, as soon as the class-1 unit being serviced is repaired, it is reinstalled, permitting the system to operate again. And in a minor breakdown, as soon as the failed class-2 unit is repaired, the system operates again. In the case where a class-2 unit fails when a class-1 unit is under repair, two alternatives are examined: (1) it is repaired immediately, interrupting the repair of the class-1 unit, and (2) its repair awaits the completion of the class-1 unit's repair.
The notion of cannibalization stems from the situation where a working (operative) part in a system may be removed from one location to replace a failed part in another location. This "cannibalization" would be done in order to improve the operation of the system in some sense.
The foundations for the study of cannibalization were established by Hirsch, Meisner, and Boll [1968]. Each part in the system is classified by type (1, . . ., N) and location (1, . . ., n). The set of parts is then partitioned into subsets such that any two parts in a subset are interchangeable, but parts in different subsets are not. At any point in time, the status of all parts in their locations is given by a binary n-tuple v = (v_1, . . ., v_n), where

    v_i = 0 implies part i has failed,
    v_i = 1 implies part i is operating.

It is assumed that failures are detected immediately. The state of the system is given by a monotonic nondecreasing "structure" function φ(v) which takes values from {0, 1, . . ., M}, where 0 is total system failure and M is the best performance. In the event there are spare parts available and a failure occurs, the failed part is immediately replaced by a spare part and the structure function is unchanged. However, if there are no spares, then part shortages exist and the problem is to find cannibalizations (transformations that make feasible interchanges of parts) which maximize φ over all feasible interchanges. A cannibalization T which maximizes φ is called admissible. Under an assumption, known as the "minimum condition," on the composite transformation φ ∘ T = φT, Hirsch, Meisner, and Boll exactly characterize the state of the system for any admissible cannibalization T as a function of the number of working parts of each type. The minimum condition asserts that φT(v) is equal to the minimum value of φTπ_i(v) over i = 1, . . ., N, where π_i(v) is the operation of making all parts of type j operable (for all j ≠ i) while the status of parts of type i is held constant. Thus, in a sense, a single part type determines the value of φT for any v. Using the minimum condition and the explicit characterization of the state of the system, φT, for any admissible T, they demonstrate the probability laws of the state of the system under the additional assumptions that (i) the failure distributions of parts of type i are identically distributed and do not depend on the location of the part for each i = 1, . . ., N, and (ii) the lifetimes of all parts are independent random variables.
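As a toy illustration of the minimum condition, the sketch below computes post-cannibalization performance from the counts of working parts of each type alone. The function name and the particular structure function (performance equals the number of complete part "slots" that can be filled, capped at M) are invented stand-ins, not the paper's general φ:

```python
# Toy illustration of the Hirsch-Meisner-Boll "minimum condition":
# after an admissible cannibalization T, system performance phi_T depends
# only on the number of working parts of each type.  The structure
# function used here (performance = number of complete "slots" that can
# be staffed with parts, capped at M) is an invented stand-in.

def phi_T(working_counts, required_per_slot, M):
    """Best achievable performance after cannibalization.

    working_counts[i]    -- working parts of type i anywhere in the system
    required_per_slot[i] -- parts of type i each operating "slot" needs
    M                    -- best possible performance level
    """
    # In the spirit of the minimum condition, the scarcest part type
    # alone determines the achievable performance.
    slots = min(w // r for w, r in zip(working_counts, required_per_slot))
    return min(slots, M)

# Failed parts of an abundant type never bind; the scarcest type does.
print(phi_T([3, 5, 4], [1, 1, 1], M=4))   # type 0 is binding
print(phi_T([6, 5, 4], [2, 1, 1], M=4))   # 6 parts, 2 needed per slot
```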
Simon [1970, 1972] generalizes the results of Hirsch, Meisner, and Boll by relaxing the restrictions on the interchangeability of parts. He classifies their interchangeability as closed, isolated, or communicating classes of parts. When the closure, isolation, or communication aspects are relaxed, then the result of Hirsch, Meisner, and Boll that all admissible cannibalizations are equivalent no longer holds. The question becomes: from the set of admissible policies for a given v, are there policies which are "better" than others? With the additional objective of maintaining the most flexibility for future cannibalization, Simon [1970] demonstrates that certain admissible policies are uniformly better than others. In another paper [1972] Simon establishes upper and lower bounds on φT under his more general interchangeability rules, and from these bounds he also develops bounds on the probability laws of the state of the system.
In a later work, Hochberg [1973] returns to the interchangeability classes of Hirsch, Meisner, and Boll; however, rather than allowing each part at its location to occupy only one of two states — failed or operating (i.e., 0-1), he assumes it can be in any one of k states

    {a_i}, i = 1, . . ., k, where 0 = a_k < . . . < a_2 < a_1 = 1.

A successively smaller state represents a decreasing level of performance. With this generalized description of the status of each part at any point in time and the minimum condition, Hochberg (paralleling the work of Hirsch, et al.) obtains a characterization of the state of the system φT (i.e., the performance level over admissible cannibalizations) as a function of the number of working parts at each level a_k. He then develops the probability distribution of φT.
Implicit in the Hirsch, Meisner, and Boll, Simon, and Hochberg works is the assumption that all parts start operating from time zero and are continuously subject to failure as long as the system operates. For many systems, when a working part is in a major assembly that has failed for other reasons, this working part does not experience any further deterioration or stress until it is cannibalized and commences to operate again. Rolfe [1970] looked at this aspect of cannibalization. He considered a group of S identical major assemblies which operate independently of each other. A major assembly contains N distinct subassemblies (or parts). Each part is interchangeable with its corresponding part in the other major assemblies, but with no other parts. All working assemblies operate continuously for a period of time T before they are inspected. During this period of operation, parts may fail but the major assemblies continue to function. Upon inspection, failed parts are immediately replaced from an initial stock of spares until the stock is exhausted, after which they are replaced by cannibalized parts from other nonworking major assemblies. Assemblies containing any failed parts are not used in the next period of operation. It is assumed that all parts fail independently while operating and that parts of type i have failure distribution 1 − e^(−λ_i t). With these assumptions the state vector of the stochastic process is described by the number of working parts of each type n_i(t) available at the end of each operating period t. This stochastic process forms an (S^N + 1)-state, absorbing Markov chain where the absorbing state is reached when n_i(t) = 0 for some i = 1, 2, . . ., N. Rolfe develops the expected number of good major assemblies at the end of any period and, because of obvious computational difficulties when S and/or N are large, he gives approximations and lower bounds for this expectation.
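Rolfe's setup lends itself to simulation. The sketch below is a simplified Monte Carlo version under assumed dynamics (per-type spare stocks, full pooling of working parts at each inspection); `simulate` and its parameter names are illustrative, not Rolfe's notation:

```python
import math
import random

# Monte Carlo sketch of a Rolfe-style cannibalization model (assumed,
# simplified dynamics).  n[i] = working parts of type i anywhere in the
# group; with one part of each type per assembly and full pooling at
# inspections, good assemblies in a period = min_i n[i].

def simulate(S, lam, T, spares, periods, seed=0):
    rng = random.Random(seed)
    n = [S] * len(lam)                 # working parts of each type
    spares = list(spares)              # initial spare stock per type
    history = []
    for _ in range(periods):
        good = min(n)                  # assemblies able to run this period
        for i, rate in enumerate(lam):
            operating = min(n[i], good)          # idle parts don't age
            p_fail = 1.0 - math.exp(-rate * T)   # P(fail during period)
            fails = sum(rng.random() < p_fail for _ in range(operating))
            replaced = min(fails, spares[i])     # spares first, then
            spares[i] -= replaced                # cannibalized parts
            n[i] -= fails - replaced
        history.append(min(n))
        if min(n) == 0:                # absorbing state: a type exhausted
            break
    return history

print(simulate(S=5, lam=[0.1, 0.05], T=1.0, spares=[2, 2], periods=10))
```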
As mentioned before, multistage models differ from cannibalization models in that new items enter the system to replace items transferred to other locations. Bartholomew [1963] examined the following model: Suppose a system contains N units with independent but identically distributed times-to-failure. These N units, although stochastically identical, are partitioned into two classes, class I and class II (according, perhaps, to their function or location within the system). Upon the failure of any unit, it must be replaced — at a cost k_1 for items in the first class and at cost k_2 for items in the second class.
The procedure Bartholomew analyzes, the so-called two-stage replacement strategy, is to replace all failures that occur amongst items in class II with items from class I and to replace items in class I, which either failed or were transferred to class II, with new items. There is a cost β for transferring one item from class I to class II. Also, there is a purchase cost per item (which turns out to be irrelevant). It is assumed that replacing and transferring items takes no time.
The total number of unit failures remains unaffected by the above procedure. But the proportion of failures in class I vis-a-vis class II may change. Bartholomew determines the following condition under which a two-stage replacement strategy is better than simple replacement at failure.
If p is the transfer rate from class I to class II, n_i is the number of units of class i, and μ is the mean time-to-failure of a unit, then the two-stage strategy is preferable if

    n_2 (k_2 − k_1) μ^(−1) − n_1 p (k_2 − k_1 + β) > 0.

The determination of p depends on whether an item from class I is selected for transfer to class II on a random basis or by an oldest-first rule. Upon assuming β = 0, he derives an approximation for p for each of these cases.
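Bartholomew's criterion is easy to evaluate numerically; the helper below (a hypothetical name, simply encoding the inequality above) tests it:

```python
# Bartholomew's criterion: the two-stage strategy beats simple
# replacement-at-failure when
#     n2*(k2 - k1)/mu  -  n1*p*(k2 - k1 + beta)  >  0.

def two_stage_preferable(n1, n2, k1, k2, beta, mu, p):
    """n1, n2: class sizes; k1, k2: failure-replacement costs;
    beta: transfer cost; mu: mean time-to-failure; p: transfer rate."""
    return n2 * (k2 - k1) / mu - n1 * p * (k2 - k1 + beta) > 0

# Class-II failures costlier (k2 > k1): feeding class II with used
# class-I units pays off when the transfer rate p is low enough.
print(two_stage_preferable(n1=10, n2=10, k1=1.0, k2=3.0,
                           beta=0.5, mu=5.0, p=0.1))   # 4.0 - 2.5 > 0
```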
One implication of his results is that for IFR time-to-failure distributions, k_1 > k_2 makes the two-stage scheme preferable to simple replacement at failure. In the DFR case with k_1 < k_2, the two-stage scheme is also better.
Naik and Nair [1965a, 1965b] have generalized the above scheme to multistage replacement strategies. Marathe and Nair [1966b] investigate multistage block replacement strategies. This latter model requires the assumption that there is a reserve of units, the "interstage inventory," attached to each class.
In his model of a variable repair rate problem, Crabill [1974] considers a collection of M + R units. The state of the system, i (i = 1, . . ., M + R), designates the number of these units in operating condition. A cost-per-unit time of C(i) is charged when in state i. Min(i, M) of the i units in operating condition are actually in production, and only these are subject to failure — at the constant hazard rate λ. When a unit fails, it enters the single-server repair facility (or its queue). On the basis of the current state of the system, the decisionmaker selects action k (k = 1, . . ., K), which provides the repair facility with an exponential repair rate μ_k at per-unit-time cost r_k. The object is to minimize long run cost-per-unit time. Using Markov decision theory, Crabill presents sufficient conditions for particular service rates to be eliminated from consideration regardless of state. Furthermore, he provides sufficient conditions ensuring that the optimal service rate is a nonincreasing function of the system's state.
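One standard way to compute a policy for a model of this kind (not necessarily Crabill's own analysis) is uniformization followed by relative value iteration. The sketch below uses invented parameter names and an invented cost structure:

```python
# Relative value iteration (after uniformization) for a Crabill-style
# variable repair-rate model -- an illustrative sketch, not Crabill's
# derivation.  State i = number of units in operating condition,
# i = 0..M+R.  min(i, M) units produce and each fails at rate lam; the
# single repair server runs at chosen rate mu[k] for cost r[k] per unit
# time; C(i) is the state cost rate.

def solve(M, R, lam, mu, r, C, iters=2000):
    n = M + R                              # states 0..n
    Lam = M * lam + max(mu)                # uniformization constant
    h = [0.0] * (n + 1)                    # relative values
    policy = [0] * (n + 1)
    for _ in range(iters):
        new = [0.0] * (n + 1)
        for i in range(n + 1):
            fail = min(i, M) * lam         # total failure rate in state i
            best = None
            for k in range(len(mu)):
                rep = mu[k] if i < n else 0.0   # nothing to repair at i = n
                stay = Lam - fail - rep
                q = (C(i) + r[k]
                     + (fail * h[i - 1] if i > 0 else 0.0)
                     + (rep * h[i + 1] if i < n else 0.0)
                     + stay * h[i]) / Lam
                if best is None or q < best:
                    best, policy[i] = q, k
            new[i] = best
        g = new[n]                         # normalize at reference state
        h = [v - g for v in new]
    return policy

# Two service rates; faster repair costs more.  Intuition (and Crabill's
# monotonicity result) suggests faster repair in the degraded states.
cost = lambda i: 10.0 * (3 - min(i, 3))    # lost-production cost rate
print(solve(M=3, R=2, lam=0.5, mu=[0.5, 2.0], r=[0.1, 1.0], C=cost))
```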
Incomplete Information
As with discrete time models, the decisionmaker may lack complete information about any of the following: the current state of the system (unless he performs an inspection), the probability law governing the system's stochastic behavior (e.g., the time-to-failure distribution of a single unit), and the cost implications of particular operating policies.
When the current state of the system is not known, the problem arises of jointly determining a replacement and inspection policy. In a model by Savage [1962] the state of the unit moves from x = 0 (new unit) to x = k, k = 1, 2, . . ., according to a Poisson process. In state x, income is earned at the rate i(x). At a cost L, the decisionmaker can inspect the unit and thereby learn its true state. After each inspection, he elects either to schedule another inspection T(x) time units into the future and not replace the unit at present, or to replace the unit (thus returning to x = 0), with the next inspection T(0) time units into the future. Replacement requires m units of time at a cost of c per unit.
The objective of the decisionmaker is to minimize the long run average cost-per-unit time by specifying the "how-long-to-next-inspection" function T(x) and the set W of all states which will call for another inspection rather than replacement. Savage shows that if i(x) is nonincreasing, then T(x) is strictly decreasing for x in W, and bounds on T(x) are derived. More explicit results are derived for two special cases of i(x).
Antelman and Savage [1965] consider a parallel problem, namely, the process governing the change of states is Brownian Motion rather than a Poisson process. In particular, it is then possible to move to an improved state without a replacement.
These models were generalized by Chitgopekar [1974]. He considers a larger (but still finite) action space which includes the class of all random time-to-inspection policies while permitting a more general stochastic process to govern the change of states. He shows that the optimal policy is nonrandom.
Keller [1974] utilized optimal control theory for an approximate method of selecting that inspection schedule, for a unit subject to failure, which will minimize cost until the detection of the first failure. Each inspection has cost L, and the cost H(t) is incurred when detection of the failure by an inspection occurs time t after failure. This problem is placed in the control theory framework by assuming that the tests are so frequent that they can be described by a smooth density n(t) which denotes the number of checks per unit time. In other words, at time t, the tests are scheduled 1/n(t) units of time apart. He then derives an integral equation for n(t) and uses this solution to minimize the expected cost up to detection of the first failure.
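Given a smooth density n(t), a concrete schedule can be recovered by placing the jth inspection where the cumulative number of checks, the integral of n from 0 to t, first reaches j. The sketch below (illustrative names, crude numerical integration) does exactly that:

```python
# Recover a discrete inspection schedule from a Keller-style smooth test
# density n(t) (checks per unit time): the j-th inspection time t_j is
# where the cumulative check count N(t) = integral_0^t n(s) ds first
# reaches j.  The density used below is an arbitrary illustration.

def schedule_from_density(n, horizon, checks, dt=1e-4):
    times, acc, t, j = [], 0.0, 0.0, 1
    while t < horizon and j <= checks:
        acc += n(t) * dt            # rectangle rule for N(t)
        t += dt
        if acc >= j:
            times.append(round(t, 3))
            j += 1
    return times

# Constant density of 2 checks per unit time -> inspections 0.5 apart.
print(schedule_from_density(lambda t: 2.0, horizon=3.0, checks=5))
```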
Several authors consider the problem of incomplete information concerning the time-to-failure distribution governing the system. A single unit is subject to random failure which can be detected with probability p by an inspection. Each inspection has cost L, and v · t is the cost of a failure which remains undetected for a duration t. The problem is to schedule inspections so as to minimize these two costs over the period the unit remains installed, which can be no longer than the time when failure is first discovered by an inspection.
Derman [1961] has found a minimax optimal solution to this problem when the time-to-failure distribution is totally unknown. However, it was necessary to assume a finite time horizon T; i.e., the cost accounting stops at either the first inspection to detect failure or at time T, whichever happens first. The reason for this is that for any possible inspection schedule there exists a distribution which would induce an arbitrarily high expected cost during an infinite time horizon. Hence, a minimax solution would not then exist. McCall [1965] gives Derman's results.
Roeloffs [1963, 1967] (letting Derman's probability of detection, p, equal one) found the minimax inspection schedule when a single percentile is all that is known of the time-to-failure distribution. That is, y and π are known where F(y) = π. He finds the minimax schedule (x_1, x_2, . . ., x_(m+n)), where 0 ≤ x_1 ≤ . . . ≤ x_m ≤ y ≤ x_(m+1) ≤ . . . ≤ x_(m+n) ≤ T, and the corresponding expected cost for a cost structure identical to Derman's. Naturally, the expected costs are less than Derman's due to the additional information. It is interesting to note that the form of his solution for selecting the inspection points after y is identical to that of Derman. That is to say, the information contained in the percentile has no bearing in carrying out a minimax surveillance after that percentile is crossed. Roeloffs also finds the optimal inspection schedule to minimize expected cost-per-unit time (rather than per period of installation). However, in doing this, he sets T = y. Kander and Raviv [1974] utilize dynamic programming to model this problem for an arbitrary T.
Combining the unknown time-to-failure distribution with the traditional age replacement problem (cost c_2 for replacement at failure, cost c_1 < c_2 for scheduled replacement), Fox [1967] used a Bayesian approach. As the realizations of times-to-failure or scheduled replacements are progressively observed, the decisionmaker learns more about the underlying time-to-failure distribution. Using the Weibull time-to-failure distribution with the gamma a priori distribution for the parameter, his objective is to minimize the long run expected discounted cost. He derives certain asymptotic optimality conditions on the stationary policy for an arbitrary fixed number of replacements.
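The Bayesian updating step can be sketched for the special case of a known Weibull shape, where the gamma prior on the scale parameter is conjugate. Function and variable names below are illustrative, not Fox's notation:

```python
# Bayesian updating in the spirit of Fox's adaptive age replacement:
# Weibull lifetimes with KNOWN shape beta (density
# lam*beta*t**(beta-1)*exp(-lam*t**beta)) and a gamma prior on the
# scale parameter lam, which is conjugate.  A failure at age t adds
# (1, t**beta) to the gamma parameters; a unit replaced on schedule at
# age tau (a censored observation) adds (0, tau**beta).

def update(a, b, observations, beta):
    """observations: list of (age, failed) pairs; returns posterior (a, b)."""
    for age, failed in observations:
        a += 1 if failed else 0
        b += age ** beta
    return a, b

a, b = 2.0, 10.0                          # gamma prior on lam
obs = [(4.0, True), (5.0, False), (3.5, True), (5.0, False)]
a, b = update(a, b, obs, beta=1.0)        # beta = 1: exponential case
print(a, b, a / b)                        # posterior mean of lam
```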
In actual maintainability applications, the exact form of the function relating expected cost to the control variable is often not known. In such cases it may be appropriate to precede any attempt at optimization by an experiment which "samples" the resultant costs for various values of the control variable. Then, using least-squares techniques, the shape of the actual cost function can be estimated — thereby permitting an inference as to the optimal value for the control variable.
Such a model was proposed by Dean and Marks [1965] and analyzed by Elandt-Johnson [1967]. If routine maintenance is provided a machine (or vehicle fleet) with frequency x per year, the average resulting cost is C(x) = bx + D(x), where b is the average cost of a scheduled maintenance and D(x) is the unknown expected cost-per-unit time of providing emergency nonscheduled maintenance. Presumably, D(x) is decreasing. C(x) is assumed to possess an absolute minimum, say at x_0. For the specific form

    D(x) = Σ_k A_k x^k + ε,

where ε is a random variable with distribution N(0, σ²) and the A_k's are not known, least-squares estimates (a_k) are obtained for the coefficients {A_k}. With those estimated coefficients, the estimated total cost function can be minimized, yielding an estimate of x_0.
Elandt-Johnson [1967] provides a normal distribution which approximates the true distribution of this estimator. She further shows that the expected cost penalty of using the estimate in place of x_0 depends on the degree and coefficients of the polynomial Σ_k A_k x^k.
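The Dean and Marks procedure can be sketched end to end: sample noisy costs, fit the polynomial by least squares, and minimize the estimated total cost. Everything below (the "true" cost curve, the quadratic degree, the names) is invented for illustration, using only the standard library:

```python
import random

# Sketch of the Dean-Marks / Elandt-Johnson procedure: sample the
# emergency-maintenance cost D(x) at several maintenance frequencies x,
# fit a quadratic by least squares (normal equations), and minimize the
# estimated total cost C(x) = b*x + Dhat(x) on a grid.

def fit_quadratic(xs, ys):
    # Normal equations for y ~ a0 + a1*x + a2*x^2.
    A = [[sum(x ** (i + j) for x in xs) for j in range(3)] for i in range(3)]
    v = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(3)]
    # Gaussian elimination with partial pivoting.
    for c in range(3):
        p = max(range(c, 3), key=lambda row: abs(A[row][c]))
        A[c], A[p] = A[p], A[c]
        v[c], v[p] = v[p], v[c]
        for row in range(c + 1, 3):
            f = A[row][c] / A[c][c]
            A[row] = [a - f * ac for a, ac in zip(A[row], A[c])]
            v[row] -= f * v[c]
    coef = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):           # back-substitution
        coef[i] = (v[i] - sum(A[i][j] * coef[j] for j in (1, 2) if j > i)) / A[i][i]
    return coef

rng = random.Random(1)
true_D = lambda x: 40 - 10 * x + 0.9 * x * x        # invented cost curve
xs = [1 + 0.5 * k for k in range(10)]
ys = [true_D(x) + rng.gauss(0, 0.5) for x in xs]    # noisy cost samples
a0, a1, a2 = fit_quadratic(xs, ys)

b = 3.0                                             # cost per scheduled visit
grid = [1 + 0.01 * k for k in range(500)]
x_hat = min(grid, key=lambda x: b * x + a0 + a1 * x + a2 * x * x)
print(round(x_hat, 2))          # estimated optimal maintenance frequency
```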
REMARKS
The foregoing survey has been primarily expository rather than critical. This approach was taken in order to be able to include most available works on maintainability in a reasonably sized monograph. It was felt that the more papers included, the more useful it would be to a general reader, practitioner, or research scholar or teacher interested in entering the area.
Now that the reader has seen what has been accomplished in maintainability, the question becomes, "what new work needs to be undertaken?"
In the case of deteriorating single-unit inspection, repair and/or replacement models, there have been many generalizations of the basic Derman models. For example, the state space has been allowed to become nondiscrete, the costs a nonstationary random variable, and the sojourn time in a state a random variable. It would appear that most future research on these types of models would be to add more system constraints and to develop efficient algorithms to solve the large linear, nonlinear or dynamic programs involved, rather than to provide additional refinements to the assumptions of the basic model. Such efforts will tend to be directed toward particular problems in the sense that the models will become more specialized and tailored to particular situations. Similar comments hold for the area of age-dependent replacement models and the maintenance-salvage models.
The area requiring the most future work is the study of multiechelon multipart interaction maintenance models. This area is perhaps the most difficult to handle mathematically, especially when the interactions occur because of stochastic dependence among the parts. In many such cases, very little can be said about the operating characteristics or optimal solutions of the models. Often the only way these problems can be handled is by simulation; consequently, few general results will be obtained in the near future. In the long run, however, work will be initiated to handle more complex kinds of stochastic dependence, where the dependence is restrictively prescribed.
Multipart, multiechelon models which involve economic dependencies are often less difficult to model and solve than those with stochastic dependencies. Frequently, as in the inventory-maintenance models, large mathematical programs result and the question becomes how to solve them rather than how to model or describe the problem. Future research will see an increase in the general applicability of these models. Of course, making them more general and/or realistic usually removes the optimality of simple, elegant policies, and one is dependent on large computers to solve the complicated mathematical programs involved. Fortunately, many decisionmakers in the military branches, other governmental agencies, industry, and private nonprofit organizations now possess the sophistication to use more complicated techniques and approaches. The phenomenal progress of space age technology is to be thanked for this development, as well as the need to maintain increasingly complex devices.
Perhaps the most future research will go into the development of new models with different constraints that are needed to handle maintenance problems in health, ecology, and the environment. With the advent of a national health insurance program, there will be an increased effort nationwide on preventive medicine in order to cut costs and to extend the lifetimes of previously disadvantaged groups. Similarly, with the depletion of many of our plant and animal resources, new concepts of maintainability must be developed to restore balances for future populations. Finally, there are many environmental problems that require inspection and corrective action. Models for all of these problems will be built. Some general theory for these models will be established, and large scale versions of the problems will be solved.
ACKNOWLEDGMENT
We thank Professor Morris Cohen of the Wharton School, The University of Pennsylvania, for his contribution to the initial write-up of the optimal control of continuous time maintenance models with salvage value, and Professor Evan Porteus of the Graduate School of Business, Stanford University, for his excellent review of this work.
BIBLIOGRAPHY
[1] A-Hameed, M. S. and F. Proschan, "Nonstationary Shock Models," Stoch. Proc. Appl. 1, 383-404 (1973).
[2] Allen, S. G. and D. A. D'Esopo, "An Ordering Policy for Repairable Stock Items," Oper. Res. 16, 669-675 (1968).
[3] Aroian, L. A., T. I. Goss, and J. Schmee, "Maintainability Demonstration Test for the Parameters of a Lognormal Distribution," AES-747, Institute of Admin. and Mgmt., Union College, Schenectady, N.Y. (1974).
[4] Arora, S. R. and P. T. Lele, "A Note on Optimal Maintenance Policy and Sale Date of a Machine," Man. Sci. 17, 170-173 (1970).
[5] Arrow, K., D. Levhari, and E. Sheshinski, "A Production Function for the Repairmen Problem," The Review of Economic Studies 39, 241-249 (1972).
[6] Arunkumar, S., "Nonparametric Age Replacement Policy," Sankhya, Series A, 34, Pt. 3, 251-256 (1972).
[7] Avramchenko, R. F., "Optimum Scheduling of the Use of Spare Elements in the Loaded Mode," Eng. Cybernetics 8, 480-483 (1970).
[8] Bakut, P. A. and Yu. V. Zhulina, "A Two-Stage Procedure of Decision Making," Automatika i Telemekhanika 6, 156-160 (1971).
[9] Bansard, J. P., J. Descamps, G. Maarek, and G. Morihain, "Stochastic Method of Replacing Components of Items of Equipment which are Subject to Random Breakdowns: The 'Trigger' Policy," Metra 10, 627-651 (1971).
[10] Barlow, R. E. and L. Hunter, "Optimal Preventive Maintenance Policies," Oper. Res. 8, 90-100 (1960).
[11] Barlow, R. E., F. Proschan, and L. C. Hunter, Mathematical Theory of Reliability (New York, Wiley, 1965).
[12] Barlow, R. E. and P. Chatterjee, "Introduction to Fault Tree Analysis," Oper. Res. Center, College of Engineering, University of California at Berkeley, ORC 73-30 (1973).
[13] Barlow, R. E. and F. Proschan, "Importance of System Components and Fault Tree Events," Oper. Res. Center, College of Engineering, University of California at Berkeley, ORC 74-3 (1974).
[14] Bar-Shalom, Y., R. E. Larson, and M. Grossberg, "Application of Stochastic Control Theory to Resource Allocation under Uncertainty," IEEE Trans. Aut. Control AC-19, 1-7 (1974).
[15] Bartholomew, D. J., "Two-Stage Replacement Strategies," Oper. Res. Quart. 14, 71-87 (1963).
[16] Barzilovich, Ye. Yu., V. A. Kashtanov, and I. N. Kovalenko, "On Minimax Criteria in Reliability Problems," Eng. Cybernetics 9, 467-477 (1971).
[17] Beja, A., "Probability Bounds in Replacement Policies for Markov Systems," Man. Sci. 16, 257-264 (1969).
[18] Bellman, R., "Equipment Replacement Policy," SIAM J. on Appl. Math. 3, 133-136 (1955).
[19] Bellman, R., Dynamic Programming (Princeton University Press, Princeton, N.J., 1957).
[20] Bellman, R. and S. Dreyfus, Applied Dynamic Programming (Princeton University Press, Princeton, N.J., 1962).
[21] Bhattacharyya, M. N., "Optimal Allocation of Stand-By Systems," Oper. Res. 17, 337-343 (1969).
[22] Blanchard, B. S., "Cost Effectiveness, System Effectiveness, Integrated Logistics Support, and Maintainability," IEEE Transactions on Reliability R-16, 117-126 (1967).
[23] Blumstein, A., R. Cassidy, W. Gorr, and A. Walters, "Optimal Specification of Air Pollution Emission Regulations including Reliability Requirements," Oper. Res. 20, 752-763 (1972).
[24] Bracken, J. and K. Simmon, "Minimizing Reductions in Readiness Caused by Time-Phased Decreases in Aircraft Overhaul and Repair Activities," Nav. Res. Log. Quart. 12, 159-165 (1965).
[25] Brandt, E. and D. Limaye, "MAD: Mathematical Analysis of Downtime," Nav. Res. Log. Quart. 17, 525-534 (1970).
[26] Brown, D. B. and H. F. Martz, Jr., "A Two-Phase Algorithm for the Maintenance of a Deteriorating Component System," AIIE Transactions 2, 106-111 (1970).
[27] Brown, D. B. and H. F. Martz, Jr., "Simulation Model for the Maintenance of a Deteriorating Component System," IEEE Transactions on Reliability R-20, 28-32 (1971).
[28] Bryson, A. E. and Y. C. Ho, Applied Optimal Control (Blaisdell, Waltham, Mass., 1969).
[29] Chatterjee, P., "Fault Tree Analysis: Min Cut Set Algorithms," Oper. Res. Center, College of Engineering, University of California at Berkeley, ORC 74-2 (1974).
[30] Cherenkov, A. P. and P. I. Popov, "Fluxes of Failures of Technological Systems Taking Account of Prophylaxis," Automatika i Telemekhanika 6, 144-148 (1970).
[31] Cherenkov, A. P. and A. G. Ivanov, "An Iterational Method for Calculating the MTBF of a Markovian System," Automatika i Telemekhanika 8, 143-151 (1972).
[32] Cherniavsky, E. A., "Some Contributions to Failure Models: The Shock and Continuous Danger Processes," Ph. D. Dissertation, Cornell University, Ithaca, N.Y. (1973).
[33] Chitgopekar, S. S., "A Note on the Costly Surveillance of a Stochastic System," Nav. Res. Log. Quart. 21, 365-371 (1974).
[34] Chu, W. W., "Adaptive Diagnosis of Faulty Systems," Oper. Res. 16, 915-927 (1968).
[35] Cleroux, R. and M. Hanscom, "Age Replacement with Adjustment and Depreciation Costs and Interest Charges," Technometrics 16, 235-239 (1974).
[36] Cozzolino, J., "Decreasing Failure Rate Processes," Nav. Res. Log. Quart. 15, 361-374 (1968).
[37] Crabill, Thomas B., "Optimal Control of a Service Facility with Variable Exponential Service Times and Constant Arrival Rate," Man. Sci. 18, 560-566 (1972).
[38] Crabill, Thomas B., "Optimal Control of a Maintenance System with Variable Service Rates," Oper. Res. 22, 736-745 (1974).
[39] Cruon, R., A. Rougerie, and C. Van de Casteele, "Scheduling Repairs and Spare Items," Rev. Frse. d'Informatique et de Recherche Operationnelle, Vol. 3, No. V-3, 87-102 (1969).
[40] Demmy, W. S., "Computing Multi-Item Procurement and Repair Schedules in Reparable-Item Inventory Systems," Working Paper, Ernst and Ernst, 2300 Winters Bank Building, Dayton, Ohio (1974).
[41] Denardo, E. and B. Fox, "Nonoptimality of Planned Replacement in Intervals of Decreasing Failure Rate," Oper. Res. 15, 358-359 (1967).
[42] Denby, D. C., "Minimum Downtime as a Function of Reliability and Priority Assignments in Component Repair," J. Ind. Engr. 18, 436-439 (1967).
[43] Denicoff, M., S. E. Haber, and T. C. Varley, "Military Essentiality of Naval Aviation Spare Parts," Man. Sci. 13, B439-B453 (1967).
[44] Derman, C., "On Minimax Surveillance Schedules," Nav. Res. Log. Quart. 8, 415-419 (1961).
[45] Derman, C., "On Sequential Decisions and Markov Chains," Man. Sci. 9, 16-24 (1962).
[46] Derman, C., "Optimal Replacement and Maintenance under Markovian Deterioration with Probability Bounds on Failure," Man. Sci. 9, 478-481 (1963a).
[47] Derman, C., "On Optimal Replacement Rules when Changes of State are Markovian," in Mathematical Optimization Techniques, R. Bellman (ed.) (University of California Press, Berkeley and Los Angeles, 1963b), pp. 201-210.
[48] Derman, C. and M. Klein, "Surveillance of Multi-Component Systems: A Stochastic Traveling Salesman Problem," Nav. Res. Log. Quart. 13, 103-111 (1966).
[49] Derman, C. and G. J. Lieberman, "A Markovian Decision Model for a Joint Replacement and Stocking Problem," Man. Sci. 13, 609-617 (1967).
[50] Derman, C., Finite State Markovian Decision Processes (Academic Press, New York, 1970).
[51] Derman, C., G. J. Lieberman, and S. Ross, "Assembly of Systems Having Maximum Reliability," Nav. Res. Log. Quart. 21, 1-12 (1974).
[52] Descamps, R. and G. Maarek, "Maintenance and Parts Replacement," Gestion 5, 367-373 (196 ).
[53] De Veroli, J. D., "Optimal Continuous Policies for Repair and Replacement," Oper. Res. Quart. 25, 89-98 (1974).
[54] Dishon, M. and C. Weiss, "Burn-In Programs for Repairable Systems," IEEE Transactions on Reliability R-22, 265-267 (1973).
[55] Downton, F., "The Reliability of Multiplex Systems with Repair," J. Roy. Statist. Soc. B 28, 459-476 (1966).
[56] Drinkwater, R. W. and N. A. J. Hastings, "An Economic Replacement Model," Oper. Res. Quart. 18, 121-138 (1967).
[57] Eckles, J. E., Optimal Replacement of Stochastically Failing Systems (Institute in Engineering-Economic Systems, Stanford University, 1966).
[58] Eckles, J. E., "Optimum Maintenance with Incomplete Information," Oper. Res. 16, 1058-1067 (1968).
[59] Elandt-Johnson, R. C., "Optimal Policy in a Maintenance Cost Problem," Oper. Res. 15, 813-819 (1967).
[60] Eppen, G. D., "A Dynamic Analysis of a Class of Deteriorating Systems," Man. Sci. 12, 223-240 (1965).
[61] Esary, J. D., A. W. Marshall, and F. Proschan, "Shock Models and Wear Processes," Ann. of Probability 1, 627-649 (1973).
[62] Everett, H., III, "Generalized Lagrange Multiplier Method for Solving Problems of Optimum Allocation of Resources," Oper. Res. 11, 399-417 (1963).
[63] Falkner, C. H., "Jointly Optimal Inventory and Maintenance Policies for Stochastically Failing Equipment," Oper. Res. 16, 587-601 (1968).
[64] Falkner, C. H., "Optimal Spares for Stochastically Failing Equipment," Nav. Res. Log. Quart. 16, 287-295 (1969).
[65] Falkner, C. H., "Jointly Optimal Deterministic Inventory and Replacement Policies," Man. Sci.: Theory 16, 622-635 (1970).
[66] Fanarzhi, G. N. and D. V. Rozhadestvenskii, "Reliability of a Doubled System with Restoration and Preventive Maintenance Service," Izvestiya Akademii Nauk SSSR, Tekhnicheskaya Kibernetika 3, 61-66 (1970).
[67] Faulkner, J. A., "The Use of Closed Queues in the Deployment of Coal Face Machinery," Oper. Res. Quart. 19, 15-23 (1968).
[68] Federowicz, A. J. and M. Mazumdar, "Use of Geometric Programming to Maximize Reliability Achieved by Redundancy," Oper. Res. 16, 948-954 (1968).
[69] Feeney, G. J. and C. C. Sherbrooke, "The (S − 1, S) Inventory Policy under Compound Poisson Demand," Man. Sci. 12, 391-411 (1966).
[70] Feldman, R. M., "Optimal Replacement for Shocks and Wear Processes," unpublished paper, Dept. of Industrial Engineering and Management Sciences, Northwestern University, Evanston, Ill. (1974).
[71] Feldman, R. M., "Optimal Replacement for Systems with Semi-Markovian Deterioration," Ph. D. Dissertation, Dept. of Industrial Engineering and Management Sciences, Northwestern University, Evanston, Ill. (1975).
[72] Fox, B., "Markov Renewal Programming by Linear Fractional Programming," SIAM J. Appl. Math. 14, 1418-1432 (1966a).
[73] Fox, B., "Age Replacement with Discounting," Oper. Res. 14, 533-537 (1966b).
[74] Fox, B., "Adaptive Age Replacement," J. Math. Anal. & Appl. 18, 365-376 (1967).
[75] Fox, B., "Semi-Markov Processes: A Primer," The RAND Corporation, RM-5803 (1968).
[76] Gertsbakh, I. B., "Dynamic Reservation Optimal Control of Switch-In of Elements," Automatika i Vychislitel'naya Tekhnika 1, 28-34 (1970).
[77] Glasser, G. J., "The Age Replacement Problem," Technometrics 9 (1967).
[78] Goss, T. I., "Truncated Sequential Test for the Variance of a Normal Distribution with Applications to Maintainability," AES-746, Institute of Admin. and Mgmt., Union College, Schenectady, N.Y. (1974a).
[79] Goss, T. I., "Nonparametric Truncated Sequential Test for the Median with Application to Maintainability," AES-745, Institute of Admin. and Mgmt., Union College, Schenectady, N.Y. (1974b).
[80] Govil, A. K., "Stochastic Behavior of a Complex System with Bulk Failures and Priority Repairs," Rev. Frse. d'Informatique et de Recherche Operationnelle 5, No. V-1, 51-58 (1971).
[81] Greenberg, H., "An Application of a Lagrangian Penalty Function to Obtain Optimal Redundancy," Technometrics 12, 545-552 (1970).
[82] Greenberg, H. and W. Pierskalla, "A Review of Quasi-Convex Functions," Oper. Res. 19, 1553-1570 (1971).
[83] Greenberg, H. and T. Robbins, "Finding Everett's Lagrange Multipliers by Generalized Linear Programming," Technical Report CP-70008, Parts I and II, Computer Science/Operations Research Center, Southern Methodist University, Dallas, Tex. (1972).
[84] Grinyer, P. H. and D. G. Toole, "A Note on the Theory of Replacement," INFOR 10, 107-12 (1972).
[85] Hastings, N. A. J., "The Repair Limit Replacement Method," Oper. Res. Quart. 20, 337-35 (1969).
[86] Henin, C., "Optimal Replacement Policies for a Single Loaded Sliding Standby," Man. Sci.: Theory 18, 706-715 (1972).
[87] Henin, C., "Optimal Allocation of Unreliable Components for Maximizing Expected Profit over Time," Nav. Res. Log. Quart. 20, 395-403 (1973).
[88] Hinomoto, H., "Sequential Control of Homogeneous Activities — Linear Programming of Semi-Markovian Decision," Oper. Res. 19, 1664-1674 (1971).
[89] Hirsch, W., M. Meisner, and C. Boll, "Cannibalization in Multicomponent Systems and the Theory of Reliability," Nav. Res. Log. Quart. 15, 331-359 (1968).
[90] Hochberg, M., "Generalized Multicomponent Systems under Cannibalization," Nav. Res. Log. Quart. 20, 585-605 (1973).
[91] Hodgson, V. and T. L. Hebble, "Nonpreemptive Priorities in Machine Interference," Oper. Res. 15, 245-253 (1967).
[92] Hopkins, D., "Infinite-Horizon Optimality in an Equipment Replacement and Capacity E)
pansion Model," Man. Sci.: Theory 18, 145-156 (1971).
[93] Howard, R., Dynamic Programming and Markov Processes (The MIT Press, Cambridge, Mass
1960).
[94] Howard, R., "System Analysis of Semi-Markov Processes," IEEE Transactions on Militai
Electronics (Apr. 1964), pp. 114-124.
[95] Howard, R., Dynamic Probabilistic Systems (John Wiley and Sons, New York, 1971).
[96] Hsu, J. I. S., "An Empirical Study of Computer Maintenance Policies," Man. Sci. 15, B180-B195 (1968).
[97] Intriligator, M. D., Mathematical Optimization and Economic Theory (Prentice-Hall, Englewood Cliffs, N.J., 1971).
[98] Jacobsen, S. E. and S. Arunkumar, "Investment in Series and Parallel Systems to Maximize Expected Life," Man. Sci. 19, 1023-1028 (1973).
[99] Jacobson, L. J., "Optimal Allocation of Resources in the Purchase of Spare Parts and/or Additional Service Channels," University of California at Berkeley, ORC 69-12 (1969).
[100] Jain, A. and K. P. K. Nair, "Comparison of Replacement Strategies for Items that Fail," IEEE Transactions on Reliability R-23, 247-251 (1974).
[101] Jain, J. P., "Overtime Concept in Provisioning of Spare Equipments," Opsearch 10, 43-49 (1973).
[102] Johnson, E., "Computation and Structure of Optimal Reset Policies," J. Am. Stat. Assoc. 62, 1462-1487 (1967).
[103] Jorgenson, D., J. McCall, and R. Radner, Optimal Replacement Policies (Rand McNally, 1967).
[104] Kabak, I., "System Availability and Some Design Implications," Oper. Res. 17, 827-837 (1969).
[105] Kalman, P. J., "A Stochastic Constrained Optimal Replacement Model: The Case of Ship Replacement," Oper. Res. 20, 327-334 (1972).
[106] Kalymon, B. A., "Machine Replacement with Stochastic Costs," Man. Sci.: Theory 18, 288-298 (1972).
[107] Kamien, M. and N. Schwartz, "Optimal Maintenance and Sale Age for a Machine," Man. Sci. 17, B495-B504 (1971).
[108] Kander, Z. and A. Raviv, "Maintenance Policies when Failure Distribution of Equipment is only Partially Known," Nav. Res. Log. Quart. 21, 419-429 (1974).
[109] Kao, E., "Optimal Replacement Rules," Oper. Res. 21, 1231-1249 (1973).
[110] Kaplan, S., "A Note on a Constrained Replacement Model for Ships Subject to Degradation by Utility," Nav. Res. Log. Quart. 21, 563-568 (1974).
[111] Keller, J. B., "Optimum Checking Schedules for Systems Subject to Random Failure," Man. Sci. 21, 256-260 (1974).
[112] Kent, A., "The Effect of Discounted Cash Flow on Replacement Analysis," Oper. Res. Quart. 11, 113-118 (1960).
[113] Klein, M., "Inspection-Maintenance-Replacement Schedules under Markovian Deterioration," Man. Sci. 9, 25-32 (1962).
[114] Klein, M. and R. Kirch, "Surveillance Schedules for Medical Examinations," Man. Sci. 20, 1403-1409 (1974).
[115] Kirsch, C. M., "A Method for Determining Bridge Painting Cycles," Materials Protection 4, 40-49 (1966).
[116] Koehler, G., A. Whinston, and G. Wright, "Matrix Iterative Techniques in Large Scale Linear Programming," Proceedings of IEEE Systems, Management and Cybernetics, Boston, Mass. (Nov. 1973a).
[117] Koehler, G., A. Whinston, and G. Wright, "The Solution of Leontieff Substitution Systems Using Matrix Iterative Techniques," Working Paper, Krannert School, Purdue University, West Lafayette, Ind. (1973b).
[118] Koehler, G., A. Whinston, and G. Wright, "An Iterative Procedure for Non-Discounted Discrete Time Markov Decisions," Discussion Paper No. 70, Center for Math. Studies in Econ. and Mgmt. Science, Northwestern University, Evanston, Ill. (1974).
[119] Kolesar, P., "Minimum Cost Replacement Under Markovian Deterioration," Man. Sci. 12, 694-706 (1966).
[120] Kolesar, P., "Randomized Replacement Rules which Maximize the Expected Cycle Length of Equipment Subject to Markovian Deterioration," Man. Sci. 13, 867-876 (1967).
[121] Kolomenskii, L. V., "The Readiness Coefficient of Complex Systems," Automatika i Vychislitel'naya Tekhnika 2, 62-65 (1970).
[122] Kopelevich, B. M., "Preventive Maintenance of Duplicate Systems," Automatika i Vychislitel'naya Tekhnika 5, 49-55 (1969).
[123] Kopelevich, B. M., "Preventive Maintenance on Complex Systems," Automatika i Vychislitel'naya Tekhnika 1, 37-42 (1971).
[124] Korneichuk, V. I., "On the Effectiveness of Conditioning and Reservation of Elements," Automatika i Telemekhanika 6, 156-160 (1970).
[125] Kulshrestha, D. K., "Operational Behaviour of a Multicomponent System Having Stand-by Redundancy with Opportunistic Repair," Unternehmensforschung 12, 159-172 (1968a).
[126] Kulshrestha, D. K., "Reliability of a Parallel Redundant Complex System," Oper. Res. 16, 28-35 (1968b).
[127] Kulshrestha, D. K., "Reliability of a Repairable Multicomponent System with Redundancy in Parallel," IEEE Transactions on Reliability R-19, 50-53 (1970).
[128] Kumagai, M., "Reliability Analysis for Systems with Repair," J. Oper. Res. Soc., Japan, 53-71 (1971).
[129] Lambe, T. A., "The Decision to Repair or Scrap a Machine," Oper. Res. Quart. 25, 99-110 (1974).
[130] Lambert, B. K., A. G. Walvekar, and J. P. Hirmas, "Optimal Redundancy and Availability Allocation in Multistage Systems," IEEE Transactions on Reliability R-20, 182-185 (1971).
[131] Lasdon, L. S., "A Survey of Large Scale Mathematical Programming," Tech. Memo., Dept. of Operations Research, Case-Western Reserve University, Cleveland, Ohio (1974).
[132] Lebedintseva, E. P., "Distribution of a Function of the Cost of Losses in Problems of Control and Preventive Maintenance," Kibernetika 5, 118-121 (1969).
[133] Liebling, T., G. Amiot, and X. Moreau, "Maintenance of a System with Elements that have Random Life Times," New Techniques 13, 55-60 (1971).
[134] Luss, H. and Z. Kander, "A Preparedness Model Dealing with N Systems Operating Simultaneously," Oper. Res. 22, 117-128 (1974).
[135] Luss, H. and Z. Kander, "Inspection Policies when Duration of Checkings is Non-Negligible," Oper. Res. Quart. 25, 299-309 (1974).
[136] Maaløe, E., "Approximate Formula for Estimation of Waiting-Time in Multi-Channel Queueing System," Man. Sci. 19, 703-710 (1973).
[137] Mann, S. H., "On the Optimal Size for Exploited Natural Animal Population," Oper. Res. 21, 672-676 (1973).
[138] Marshall, A. W. and F. Proschan, "Mean Life of Series and Parallel Systems," J. Appl. Prob. 7, 165-174 (1970).
[139] Marshall, A. W. and F. Proschan, "Classes of Distributions Applicable in Replacement, with Renewal Theory Implications," Proc. Sixth Berkeley Symp. Math. Statist. Prob. I (1972), pp. 395-415.
[140] Martz, H. F., Jr., "On Single Cycle Availability," IEEE Transactions on Reliability R-20, 21-23 (1971).
[141] Masse, P., Optimal Investment Decisions (Prentice-Hall, Englewood Cliffs, N.J., 1962).
[142] McCall, J. J., "Maintenance Policies for Stochastically Failing Equipment: A Survey," Man. Sci. 11, 493-524 (1965).
[143] McNichols, R. and G. Messer, "A Cost-Based Availability Allocation Algorithm," IEEE Transactions on Reliability R-20, 178-182 (1971).
[144] Meisel, W. S., "On-Line Optimization of Maintenance and Verification Schedules," IEEE Transactions on Reliability R-18, 200-201 (1969).
[145] Meyer, R. A., Jr., "Equipment Replacement under Uncertainty," Man. Sci.: Theory 17, 750-758 (1971).
[146] Mikami, M., "On Application of Erlang Distribution to Repairing a Group of Machines," Kyudai Kogaku Shuho (Technology Reports, Kyushu University) 44, 1-8 (1971a).
[147] Mikami, M., "On General Erlang Distribution Applied to Repairing of a Group of Machines — The Case of 2 Channels," Kyushu Daigaku Kogaku Shuho 44, 131-138 (1971b).
[148] Miller, B. L., "A Multi-Item Inventory Model with a Joint Backorder Criterion," Oper. Res. 19, 1467-1476 (1971).
[149] Miller, B. L., "Dispatching from Depot Repair in a Recoverable Item Inventory System: On the Optimality of a Heuristic Rule," Working paper, Western Management Science Institute, University of California, Los Angeles, Calif. (1973).
[150] Mine, H. and H. Kawai, "Preventive Maintenance of a 1-Unit System with a Wearout State," IEEE Transactions on Reliability R-23, 24-29 (1974a).
[151] Mine, H. and H. Kawai, "An Optimal Maintenance Policy for a 2-Unit Parallel System with Degraded State," IEEE Transactions on Reliability R-23, 81-86 (1974b).
[152] Misra, K. B., "Reliability Optimization of a Series Parallel System," IEEE Transactions on Reliability R-21, 230-238 (1972).
[153] Moore, J. R., Jr., "Forecasting and Scheduling for Past-Model Replacement Parts," Man. Sci.: Appl. 18, B200-B213 (1971).
[154] Moore, S. C., W. M. Faucett, R. D. Gilbert, and R. W. McMichael, "Computerized Selection of Aircraft Spares Inventories," Fort Worth Div. of General Dynamics, Fort Worth, Tex. (1970).
[155] Morimura, H., "On Some Preventive Maintenance Policies for IFR," J. Oper. Res. Soc., Japan 12, 94-124 (1970).
[156] Morimura, H. and H. Makabe, "On Some Preventive Maintenance Policies," J. Oper. Res. Soc., Japan 6, 17-47 (1963a).
[157] Morimura, H. and H. Makabe, "A New Policy for Preventive Maintenance," J. Oper. Res. Soc., Japan 5, 110-124 (1963b).
[158] Morimura, H. and H. Makabe, "Some Considerations on Preventive Maintenance Policies with Numerical Analysis," J. Oper. Res. Soc., Japan 7, 154-171 (1964).
[159] Moses, M., "Dispatching and Allocating Servers to Stochastically Failing Networks," Man. Sci. 18, B289-B300 (1972).
[160] Munford, A. G. and A. K. Shahani, "A Nearly Optimal Inspection Policy," Oper. Res. Quart. 23, 373-379 (1972).
[161] Munford, A. G. and A. K. Shahani, "An Inspection Policy for the Weibull Case," Oper. Res. Quart. 24, 453-458 (1973).
[162] Nair, K. P. K. and M. D. Naik, "Multistage Replacement Strategies," Oper. Res. 13, 279-290 (1965a).
[163] Nair, K. P. K. and M. D. Naik, "Multistage Replacement Strategies with Finite Duration of Transfer," Oper. Res. 13, 828-835 (1965b).
[164] Nair, K. P. K. and V. P. Marathe, "On Multistage Replacement Strategies," Oper. Res. 14, 537-539 (1966a).
[165] Nair, K. P. K. and V. P. Marathe, "Multistage Planned Replacement Strategies," Oper. Res. 14, 874-887 (1966b).
[166] Nakagawa, T., "The Expected Number of Visits to State k Before a Total System Failure of a Complex System with Repair Maintenance," Oper. Res. 22, 108-111 (1974).
[167] Nakagawa, T. and S. Osaki, "Some Comments on a 2-Unit Standby Redundant System," Keiei Kagaku 14, 29-34 (1971).
[168] Nakagawa, T. and S. Osaki, "The Optimal Repair Limit Replacement Policies," Oper. Res. Quart. 25, 311-317 (1974).
[169] Nakamichi, H., J. Fukata, S. Takamatsu, and M. Kodoma, "Reliability Considerations on a Repairable Multicomponent System with Redundancy in Parallel," J. Oper. Res. Soc., Japan 17 (1974).
[170] Naslund, B., "Simultaneous Determination of Optimal Repair Policy and Service Life," Swedish J. Economics 68, No. 2, 63-73 (1966).
[171] Neuvians, G. and H. J. Zimmerman, "The Determination of Optimal Replacement and Maintenance Policies by Dynamic Programming," Ablauf und Planungsforschung 11, 94-104 (1970).
[172] Nicholson, T. A. J. and R. D. Pullen, "Dynamic Programming Applied to Ship Fleet Management," Oper. Res. Quart. 22, 211-220 (1971).
[173] Nylehn, B. and H. M. Blegen, "Organizing the Maintenance Function — An Analytical Approach," International J. Production Res. 7, 23-32 (1968).
[174] Onaga, K., "Maintenance and Operating Characteristics of Communication Networks," Oper. Res. 17, 311-336 (1969).
[175] Osaki, S., "Reliability Analysis of a 2-Unit Standby Redundant System with Standby Failure," Opsearch 7, 13-22 (1970a).
[176] Osaki, S., "System Reliability Analysis by Markov Renewal Processes," J. Oper. Res. Soc., Japan 12, 127-188 (1970b).
[177] Osaki, S., "Reliability Analysis of a Standby Redundant System with Preventive Maintenance," Keiei Kagaku 14, 233-245 (1971).
[178] Osaki, S., "An Intermittently Used System with Preventive Maintenance," Keiei Kagaku 15, 102-111 (1972).
[179] Osaki, S. and T. Asakura, "A Two-Unit Standby Redundant System with Repair and Preventive Maintenance," J. Appl. Prob. 7, 641-648 (1970).
[180] Osaki, S. and T. Nakagawa, "Optimal Preventive Maintenance Policies for a 2-Unit Redundant System," IEEE Transactions on Reliability R-23, 86-91 (1974).
[181] Polyak, D. G., "Calculation of the Reliability of Restorable Systems Taking Account of Work Shifts of Repair Personnel," Automatika i Telemekhanika 12, 103-109 (1968).
[182] Port, S. C., "Optimal Procedures for the Installation of a Unit Subject to Stochastic Failures," J. Math. Anal. & Appl. 9, 491-497 (1964).
[183] Port, S. C. and J. Folkman, "Optimal Procedures for Stochastically Failing Equipment," J. Appl. Prob. 3, 521-537 (1966).
[184] Porteus, E. and Z. Lansdowne, "Optimal Design of a Multi-Item Multi-Location Multi-Repair Type Repair and Supply System," Nav. Res. Log. Quart. 21, 213-238 (1974).
[185] Proschan, F., "Recent Research on Classes of Life Distributions Useful in Maintenance Modeling," Florida State University Statistics Report M291, AFOSR Tech. Report No. 26 (1974).
[186] Proschan, F. and T. A. Bray, "Optimal Redundancy Under Multiple Constraints," Oper. Res. 13, 800-814 (1965).
[187] Quayle, N. J. T., "Damaged Vehicles — Repair or Replace," Oper. Res. Quart. 23, 83-87 (1972).
[188] Rangnekar, S. S., "Effect of Inflation on Equipment Replacement Policy," Opsearch 4, 149-163 (1967).
[189] Reinitz, R. C. and L. Karasyk, "A Stochastic Model for Planning Maintenance of Multi-part Systems," from Proc. of the Fifth International Conf. on O.R., J. Lawrence, ed. (1969), pp. 703-713.
[190] Roeloffs, R., "Minimax Surveillance Schedules with Partial Information," Nav. Res. Log. Quart. 10, 307-322 (1963).
[191] Roeloffs, R., "Minimax Surveillance Schedules for Replaceable Units," Nav. Res. Log. Quart. 14, 461-471 (1967).
[192] Rolfe, A. J., "Markov Chain Analysis of a Situation where Cannibalization is the Only Repair Activity," Nav. Res. Log. Quart. 17, 151-158 (1970).
[193] Roll, Y. and P. Naor, "Preventive Maintenance of Equipment Subject to Continuous Deterioration and Stochastic Failure," Oper. Res. Quart. 19, 61-73 (1968).
[194] Rose, M., "An Investment Model for Repairable Assets: The F — F Case," Research Contribution 31, Institute of Naval Studies, Center for Naval Analysis, Arlington, Va. (1969).
[195] Rose, M., "Computing the Expected End-Product Service Time Using Stochastic Item Delays," Oper. Res. 19, 524-540 (1971).
[196] Rose, M., "Determination of the Optimal Investment in End Products and Repair Resources," Nav. Res. Log. Quart. 20, 147-159 (1973).
[197] Rose, M. and L. Brown, "An Incremental Production Function for the End-Item Repair Process," AIIE Trans. 2, 166-171 (1970).
[198] Ross, S., "Non-Discounted Denumerable Markovian Decision Models," Annals of Math. Statist. 39, 412-423 (1968a).
[199] Ross, S., "Denumerable Markovian Decision Models," Ph. D. Dissertation, Department of Statistics, Stanford University, Stanford, Calif. (1968b).
[200] Ross, S., "A Markovian Replacement Model with a Generalization to Include Stocking," Man. Sci.: Theory 15, 702-715 (1969a).
[201] Ross, S., "Average Cost Semi-Markov Decision Processes," Oper. Res. Center, University of California at Berkeley, ORC 69-27, Berkeley, Calif. (1969b).
[202] Ross, S., Applied Probability Models with Optimization Applications (Holden-Day, San Francisco, Calif., 1970).
[203] Ross, S., "On Time to First Failure in Multicomponent Exponential Reliability Systems," Oper. Res. Center, University of California at Berkeley, ORC 74-8, Berkeley, Calif. (1974).
[204] Ruben, R. V., "The Economically Optimal Period of Preventive Maintenance of a System with Possible Dislocations," Automatika i Vychislitel'naya Tekhnika 2, 30-34 (1971).
[205] Sackrowitz, H. and E. Samuel-Cahn, "Inspection Procedures for Markov Chains," Man. Sci. 21, 261-270 (1974).
[206] Satia, J., "Markovian Decision Processes with Uncertain Transition Matrices and/or Probabilistic Observation of States," unpublished Ph. D. Dissertation, Stanford University, Stanford, Calif. (1968).
[207] Satia, J. and R. Lave, "Markovian Decision Processes with Uncertain Transition Probabilities," Oper. Res. 21, 728-740 (1973).
[208] Savage, R., "Surveillance Problem," Nav. Res. Log. Quart. 9, 187-209 (1962).
[209] Savage, R. and G. R. Antelman, "Surveillance Problems: Wiener Processes," Nav. Res. Log. Quart. 12, 35-55 (1965).
[210] Scheaffer, R. L., "Optimum Age Replacement Policies with an Increasing Cost Factor," Technometrics 13, 139-144 (1971).
[211] Schrady, D. A., "A Deterministic Inventory Model for Repairable Items," Nav. Res. Log. Quart. 14, 391-398 (1967).
[212] Schwartz, A., J. Sheler, and C. Cooper, "Dynamic Programming Approach to the Optimization of Naval Aircraft Rework and Replacement Policies," Nav. Res. Log. Quart. 18, 395-414 (1971).
[213] Schweitzer, P. J., "Optimal Replacement Policies for Hyperexponentially and Uniformly Distributed Lifetimes," Oper. Res. 15, 360-362 (1967).
[214] Scott, M., "On the Number of Tasks by a Machine Subject to Breakdown," Cah. Cent. d'Et. Rech. Oper. 9, 44-56 (1967).
[215] Scott, M., "A Problem in Machine Breakdown," Unternehmensforschung 12, 23-33 (1968).
[216] Scott, M., "Distribution of the Number of Tasks by a Repairable Machine," Oper. Res. 20, 851-859 (1972).
[217] Serfozo, R., "A Replacement Problem Using a Wald Identity for Discounted Variables," Man. Sci. 20, 1314-1315 (1974).
[218] Serfozo, R. and R. Deb, "Optimal Control of Batch Service Queues," Adv. Appl. Prob. 5, 340-361 (1973).
[219] Sethi, S., "Simultaneous Optimization of Preventive Maintenance and Replacement Policies for Machines: A Modern Control Theory Approach," AIIE Transactions 5, 156-163 (1973).
[220] Sethi, S. and T. E. Morton, "A Mixed Optimal Technique for Generalized Machine Replacement Problem," Nav. Res. Log. Quart. 19, 471-482 (1972).
[221] Sherbrooke, C. C., "METRIC: A Multi-Echelon Technique for Recoverable Item Control," Oper. Res. 16, 122-141 (1968).
[222] Sherbrooke, C. C., "An Evaluation for the Number of Operationally Ready Aircraft in a Multi-level Supply System," Oper. Res. 19, 618-635 (1971).
[223] Shershin, A., "Mathematical Optimization Techniques for the Simultaneous Apportionment of Reliability and Maintainability," Oper. Res. 18, 95-106 (1970).
[224] Silver, E., "Inventory Allocation Among an Assembly and Its Repairable Subassemblies," Nav. Res. Log. Quart. 19, 261-280 (1972).
[225] Simon, R. M., "The Reliability of Multistate Systems Subject to Cannibalization," Report AM-69-1, School of Engineering and Applied Science, Washington University, St. Louis (1969).
[226] Simon, R. M., "Optimal Cannibalization Policies for Multicomponent Systems," SIAM J. Appl. Math. 19, 700-711 (1970).
[227] Simon, R. M., "The Reliability of Multicomponent Systems Subject to Cannibalization," Nav. Res. Log. Quart. 19, 1-14 (1972).
[228] Simon, R. M. and D. A. D'Esopo, "Comments on a Paper by S. G. Allen and D. A. D'Esopo: 'An Ordering Policy for Repairable Stock Items'," Oper. Res. 19, 986-989 (1971).
[229] Sivazlian, B. D., "On a Discounted Replacement Problem with Arbitrary Repair Time Distribution," Man. Sci. 19, 1301-1309 (1973).
[230] Smallwood, R. and E. Sondik, "The Optimal Control of Partially Observable Markov Processes over a Finite Horizon," Oper. Res. 21, 1071-1088 (1973).
[231] Sobel, M. J., "Production Smoothing with Stochastic Demand and Related Inventory Problems," Tech. Report No. 17, Dept. of Operations Research and the Dept. of Statistics, Stanford University, Stanford, Calif. (1967).
[232] Soland, R. M., "A Renewal Theoretic Approach to the Estimation of Future Demand for Replacement Parts," Oper. Res. 16, 36-51 (1968).
[233] Solov'ev, A. D., "Reservation with Rapid Recovery," Izvestiya Akademii Nauk SSSR, Tekhnicheskaya Kibernetika, No. 1, 56-71 (1970).
[234] Sondik, E., "The Optimal Control of Partially Observable Markov Processes," Ph. D. Dissertation, Dept. of Eng. Econ. Systems, Stanford University (1971).
[235] Srinivasan, S. K., "The Effect of Standby Redundancy in System's Failure with Repair Maintenance," Oper. Res. 14, 1024-1036 (1966).
[236] Srinivasan, S. K., "First Emptiness in the Spare Parts Problem for Repairable Components," Oper. Res. 16, 407-415 (1968).
[237] Srinivasan, S. K. and M. Gopalan, "Probabilistic Analysis of a 2-Unit System with a Warm Standby and a Single Repair Facility," Oper. Res. 21, 748-754 (1973).
[238] Tahara, A. and T. Nishida, "Optimal Replacement Policies for a Repairable System with Markovian Transition of State," J. Oper. Res. Soc., Japan 16, 78-103 (1973).
[239] Tapiero, C. S., "Optimal Simultaneous Replacement and Maintenance of a Machine with Process Discontinuities," Rev. Frse. d'Informatique et Recherche Operationnelle, No. 2, 79-86 (1971).
[240] Tapiero, C. S., "Optimal Maintenance and Replacement of a Sequence of n-Machines and Technical Obsolescence," Opsearch 10, 1-13 (1973).
[241] Taylor, H. M., "Optimal Replacement Under Additive Damage and Other Failure Models," unpublished paper, Oper. Res. Dept., Cornell University, Ithaca, N.Y. (1973).
[242] Taylor, J. and R. R. P. Jackson, "Application of Birth and Death Processes to the Provision of Spare Machines," Oper. Res. Quart. 5, 95-108 (1954).
[243] Thompson, G. L., "Optimal Maintenance Policy and Sale Date of a Machine," Man. Sci.: Theory 14, 543-550 (1968).
[244] Tillman, F. A., "Optimization by Integer Programming of Constrained Reliability Problems with Several Modes of Failure," IEEE Transactions on Reliability R-18, 47-53 (1969).
[245] Tillman, F. A. and J. M. Liittschwager, "Integer Programming Formulation to Constrained Reliability Problems," Man. Sci. 13, 887-899 (1967).
[246] Turban, E., "The Use of Mathematical Models in Plant Maintenance Decision Making," Man. Sci. 13, 342-358 (1967).
[247] Veinott, A. F., Jr., "Optimal Policy for a Multi-Product, Dynamic, Nonstationary Inventory Problem," Man. Sci. 12, 206-222 (1965).
[248] Veinott, A. F., Jr., "Extreme Points of Leontieff Substitution Systems," Linear Algebra and Its Applications 1, 181-194 (1968).
[249] Vergin, R. C., "Scheduling Maintenance and Determining Crew Size for Stochastically Failing Equipment," Man. Sci. 12, B52-B65 (1966).
[250] Vergin, R. C., "Optimal Renewal Policies for Complex Systems," Nav. Res. Log. Quart. 15, 523-534 (1968).
[251] von Ellenrieder, A. and A. Levine, "The Probability of an Excessive Nonfunctioning Interval," Oper. Res. 14, 835-840 (1966).
[252] Washburn, L. A., "Determination of Optimum Burn-In Time: A Composite Criterion," IEEE Transactions on Reliability R-19, 134-140 (1970).
[253] Weiss, G., "On Some Economic Factors Influencing a Reliability Program," NAVORD 425i, Naval Ordnance Laboratory, White Oak, Md. (1956).
[254] Wolff, M. R., "Optimale Instandhaltungspolitiken in einfachen Systemen," Lecture Notes in Oper. Res. 78, Springer, Berlin (1970).
[255] Wolff, M. R. and R. Subramanian, "Optimal Readjustment Intervals," Oper. Res. 22, 191-197 (1974).
[256] Woodman, R. C., "Replacement Policies for Components that Deteriorate," Oper. Res. Quart. 18, 267-281 (1967).
[257] Wright, G. P. and J. Prawda, "On a Replacement Problem," Cah. Cent. d'Et. Rech. Oper. 14, 43-52 (1972).
[258] Zacks, S. and W. Fenske, "Sequential Determination of Inspection Epochs for Reliability Systems with General Lifetime Distributions," Nav. Res. Log. Quart. 20, 377-386 (1973).
[259] Zak, Yu. I., "A Problem of Determination of the Optimal Sequencing of Readjustments of Equipment," Kibernetika 6, 86-92 (1968).
MARKOVIAN DETERIORATION WITH UNCERTAIN
INFORMATION — A MORE GENERAL MODEL
Donald Rosenfield
State University of New York
at Stony Brook
Stony Brook, N.Y.
ABSTRACT
A model of a deteriorating system with imperfect information is considered. The struc-
tures appropriate for such a model include failing machinery and depleted inventory sys-
tems. In an effort to add a new dimension to such models, it is assumed that the operator
must pay an inspection cost to determine the precise state of the system. At the start of every
time period, the operator is faced with three choices: repair, no action, or inspection. Under
fairly general assumptions, the optimal policy for repair is found to be straightforward and
intuitive. This result has two important areas of application.
INTRODUCTION
This paper is concerned with finding the form of optimal policies for deteriorating Markov processes
with imperfect information, i.e., where the state is partially observable. If the state is partially ob-
servable, then the operator has some information about the state, but he cannot precisely determine
the state. Part of his task is determination of when an inspection cost should be paid to determine the
state.
Examples of such processes include deteriorating machinery or an inventory system being depleted.
A machine, for example, could fail or deteriorate to the point where large losses of production occur.
In inventory systems, processes deteriorate in the sense that stock becomes depleted, resulting in
the possibility of unfilled orders. In both of these cases, the manager must decide on an inspection and
repair policy to minimize his losses.
In the model presented, the process undergoes deterioration according to a discrete-time Markov
process, and operating and repair costs are presumed to increase with state number.
Models such as those in [2] and [7] analyzed such processes, and assume that the operator has
perfect knowledge of the state of the system. Under such an assumption, the operator is concerned
with the optimal times to repair the machine and the optimal times to take no action. In other words,
under the perfect information assumption, the operator must decide between two actions. Under the
assumption that the state is partially observable, the operator has a choice of three actions: repair,
no action, or inspection. Upon inspection, the operator pays an inspection cost and determines the
precise state of the system. It is felt that the assumption of imperfect information adds a different
dimension to the general problem.
Following models in [10], [17] and [18], the author [15] introduced a general model that incorporated
the assumption of imperfect information. The author also obtained definitive policy results for the model.
The present paper is a followup to that paper in that it presents weaker results under weaker assump-
tions on the transition matrix.

The result of this paper states that the optimal region for repair is of a certain intuitive type (see
Figure 1). The formal definition of such a policy is presented in Section 1.
[Figure 1 shows the space of observed states: the horizontal axis is the last real state known with certainty (0 through N), and the vertical axis is the number of time periods since the real state was known. Observed states above a nonincreasing boundary call for repair; those below it call for no action or inspection.]

Figure 1. A monotonic and optimal policy.
1. MODEL AND RESULTS
We have an underlying discrete-time Markov process, representing, for example, a deteriorating
machine or an inventory system undergoing depletion. The process is characterized by the states
(referred to as real states) 0, 1, . . ., N, and transition matrix P. Under the structure that we have intro-
duced, when the process is in "observed" state (i, k), then the operator knows that k time units ago
the process was in real state i. Furthermore, the operator has not obtained any additional information
in the past k time units. Under the repair option, the operator assumes an expected repair cost

\sum_{j=0}^{N} p_{ij}^{k} C_j,

where C_j, j = 0, . . ., N, is the repair cost for real state j, and the process reverts to state (0, 0). (We
are using p_{ij}^{k} as the element ij in the matrix P^k. This value is equal to the a posteriori probability that
the real state is j given an observed state (i, k).) Under the other two options, the operator assumes
an expected operating cost

\sum_{j=0}^{N} p_{ij}^{k} L_j,

where L_j, j = 0, 1, . . ., N, is the one-period operating cost for state j. Under the inspection option, the
operator assumes also an inspection cost M, and the process reverts to state (j, 0) at the end of the cur-
rent time period with probability p_{ij}^{k+1}. Note that inspection is perfect in that it gives the operator the
precise state. Under the final option (no action), the process undergoes transition to state (i, k+1). It
is seen that the space (i, k), i = 0, 1, . . ., N, k ≥ 0, is sufficient to describe the operator's knowledge of
the process.
The model of Derman [2] was an important influence on this paper and its predecessor. Derman
presented a multistate model with perfect information and proved the existence of an intuitive optimal
policy. Kolesar [11] extended the result. Girshick and Rubin [7] and Taylor [20] looked at two-state
models where an observable quality characteristic is recorded each time period. Girshick and Rubin
solved this problem, but incorrectly hypothesized the optimal policy when observation of the quality char-
acteristic is a costly option. Taylor found a counterexample to their hypothesis. Such models with three
options (repair, inspection, or no action) have been examined by Smallwood and Sondik [18], who
provide a computational algorithm for solving the finite-horizon case, Klein [10], who formulates the
problem as a linear program, Emmons [5], who introduces a model similar to this author's, and Ross
[17], who presents a generalized model and obtains definitive policy results for the case of two real
states. In the recent paper of this author [15], the general model of the present paper was first set
forth. The author also found definitive policy results for the model. Emmons [5] showed the optimal
region for repair as i varies for the case k = 0, but this result is less inclusive than the results that follow.
We now define a monotonic policy (Figure 1):

DEFINITION: Suppose a policy is defined by numbers k*(i) nonincreasing in i such that for state
(i, k) a repair is made if k ≥ k*(i), and no action or inspection is called for otherwise. Then we call
this policy a monotonic policy and the numbers k*(i) critical numbers.
The major result of this paper is that a monotonic policy is optimal under fairly general assump-
tions. This optimality holds under both the discounted and average-cost criteria. It is felt that a mono-
tonic policy is an intuitive one.
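In code, a monotonic policy is just a threshold rule with nonincreasing critical numbers. The sketch below applies the definition directly; the critical numbers are invented for illustration, not derived from the optimality conditions of the paper.

```python
def is_monotonic(k_star):
    """The critical numbers k*(i) must be nonincreasing in i."""
    return all(a >= b for a, b in zip(k_star, k_star[1:]))

def calls_for_repair(i, k, k_star):
    """Repair is prescribed in observed state (i, k) iff k >= k*(i);
    otherwise the remaining choice is between no action and inspection."""
    return k >= k_star[i]

k_star = [5, 3, 1]                      # hypothetical critical numbers, N = 2
assert is_monotonic(k_star)
print(calls_for_repair(0, 4, k_star))   # False: below the boundary of Figure 1
print(calls_for_repair(2, 1, k_star))   # True: k has reached k*(2)
```

The nonincreasing property is what makes the repair region in Figure 1 a "staircase": the worse the last known state, the fewer periods of uncertainty are tolerated before repair.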
In reference to this result, some comparison should be made to the results of the previous paper of
the author [15]. There, it is shown that a monotonic policy is optimal under stronger assumptions. In
addition, the optimal region for inspection is also shown. A monotonic policy is concerned only with
the optimal region for repair. The optimality of the detailed inspection structure of the previous paper
has not been shown under the new assumptions, except for a subcase that we discuss later. The optimal-
ity of a monotonic policy, however, is shown under much more general assumptions than in [15].
The assumptions that we make are:
1. $C_j$ and $L_j$ are nondecreasing in $j$.
2. $C_j - L_j$ is nonincreasing in $j$.
3. There is a number $\delta > 0$ and an integer $r$ such that $\min_{0 \le i \le N} p^r_{ij} \ge \delta$ for any recurrent state $j$.
4. (a) $\sum_{j=k}^{N} p_{ij}$ is nondecreasing in $i$ for $k = 0, 1, \ldots, N$ (increasing failure rate, or IFR), and
(b) for any vector $F = \{F_j,\ j = 0, 1, \ldots, N:\ F_j \text{ nonincreasing in } j\}$,
$$\sum_{j=0}^{N} p_{ij} F_j \le F_i \vee \Big(\sum_{j=0}^{N} \Pi_j F_j\Big), \qquad i = 1, \ldots, N-1,$$
where $\Pi$ is the vector of limiting state probabilities and $\vee$ denotes the maximum operator.
Assumptions 1 and 2 are identical to the corresponding assumptions in [15] and formalize the ideas that the higher numbered states are costlier and that repair improves relative to no action as the state number increases. Assumption 3 is analogous to Assumption 3 of [15] and insures one ergodic class. The only real difference is the matrix assumption, Assumption 4, which is a weaker assumption than Assumption 4 of [15]. The first part of the Assumption, the IFR assumption, is an intuitive notion developed by Derman [2] that implies that the mass tends to the higher numbered states. The second
392 D. ROSENFIELD
part of Assumption 4 is a new concept developed by the author. The condition can be checked by solving $N-1$ single-constraint linear programs (see page 91 of [14]). Note in particular that the condition is trivially satisfied for a process with two real states ($N = 1$), and is also satisfied if $P$ is upper triangular. The two-state process is a special case that we shall examine later.
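As a concrete illustration, both parts of Assumption 4 can be spot-checked numerically for a given transition matrix. The sketch below is not the linear-programming test of [14]; it verifies the IFR property 4(a) by tail sums and probes 4(b) with random nonincreasing vectors $F$. All matrix entries are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-state transition matrix (states 0, 1, 2); values are
# illustrative only.
P = np.array([[0.7, 0.2, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.3, 0.6]])

def is_ifr(P):
    """Assumption 4(a): tail sums sum_{j>=k} p_ij nondecreasing in i, each k."""
    tails = P[:, ::-1].cumsum(axis=1)[:, ::-1]   # tails[i, k] = sum over j >= k
    return bool(np.all(np.diff(tails, axis=0) >= -1e-12))

def stationary(P):
    """Limiting state probabilities Pi solving Pi P = Pi, sum(Pi) = 1."""
    n = len(P)
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.append(np.zeros(n), 1.0)
    return np.linalg.lstsq(A, b, rcond=None)[0]

def check_4b(P, trials=2000):
    """Spot-check Assumption 4(b) on random nonincreasing vectors F."""
    Pi = stationary(P)
    for _ in range(trials):
        F = np.sort(rng.standard_normal(len(P)))[::-1]   # nonincreasing in j
        for i in range(1, len(P) - 1):
            if P[i] @ F > max(F[i], Pi @ F) + 1e-9:
                return False                              # violation found
    return True

print(is_ifr(P), check_4b(P))
```

For an upper triangular $P$ the 4(b) loop is vacuous at $N = 1$ and trivially satisfied otherwise, in line with the remark above; a Monte Carlo probe can of course only refute the assumption, not certify it.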
Under these assumptions, the optimal policy is often a monotonic policy. The determining conditions can be expressed in terms of the costs. In Theorem 3.1.2 and Corollary 3.1.1 of the author's technical report [14], conditions for trivial (perpetual no action) optimal policies are given. Assume the optimal policy is not the trivial one. Then the major theoretical result of this paper is: if the limiting optimal action in state $(N,k)$ as $k \to \infty$ is repair, an assumption dependent on the costs $C_j$, $L_j$, and $M$, then the optimal policy is a monotonic one. We shall present sufficient conditions on the costs for insuring that the limiting optimal action in state $(N,k)$ is indeed repair. The result is of particular importance for two special cases.
2. THE DISCOUNTED CASE
We first present the mathematical definition relating to the concept of limiting action in $(i,k)$ as $k \to \infty$. We define $F_{ik}$ as the difference between the discounted cost choosing repair initially and the discounted cost using no action initially, subsequently following an optimal policy in both cases. We denote the discount factor as $\alpha$ and note that $0 \le \alpha < 1$. We define $G_{ik}$, similarly, as the cost of repair minus the cost of inspection. Both differences are functions of the present state $(i,k)$, and their signs determine the optimal action at $(i,k)$.
Given the recursive expression for $C(i,k)$, the optimal discounted cost over an infinite horizon when starting in state $(i,k)$, we can determine the values of $F_{ik}$ and $G_{ik}$. These values are obtained by examining the terms of the expression for $C(i,k)$:
$$C(i,k) = \min\Big\{\sum_{j=0}^{N} p^k_{ij} C_j + \alpha C(0,0),\ \ \sum_{j=0}^{N} p^k_{ij} L_j + M + \alpha \sum_{j=0}^{N} p^{k+1}_{ij} C(j,0),\ \ \sum_{j=0}^{N} p^k_{ij} L_j + \alpha C(i,k+1)\Big\}.$$
The three components in the minimization correspond to the choices of repair, inspection, and no action, and $0 \le \alpha < 1$ is the discount factor. Recursive expressions for optimal discounted cost are due to Taylor [20] and depend on the existence of stationary nonrandom optimal policies, which is due to Blackwell [1] and Derman [3].
By examining the expression, we see that
$$F_{ik} = \sum_{j=0}^{N} p^k_{ij}(C_j - L_j) + \alpha\big(C(0,0) - C(i,k+1)\big)$$
and
$$G_{ik} = \sum_{j=0}^{N} p^k_{ij}(C_j - L_j) - M + \alpha C(0,0) - \alpha \sum_{j=0}^{N} p^{k+1}_{ij} C(j,0).$$
We analogously denote by $C^n(i,k)$ the optimal discounted cost over a finite horizon of $n$ periods. An analogous expression exists for $C^n(i,k)$, expressed recursively in terms of $C^{n-1}(i,k)$. From Taylor [20], we
MARKOVIAN DETERIORATION MODEL 393
have the geometric convergence of $C^n(i,k)$ to $C(i,k)$. From the ergodicity assumption, it follows inductively that $C^n(i,k)$ has a limit as $k \to \infty$ (denoted by $C^n_\infty$) that is identical for all $i$. From the geometric convergence of $C^n(i,k)$ to $C(i,k)$ (and hence uniform convergence), we see that $C(i,k)$ converges to $C_\infty = \lim_n C^n_\infty$ as $k \to \infty$. Consequently,
$$F_\infty = \lim_{k\to\infty} F_{ik} = \sum_{j=0}^{N} \Pi_j (C_j - L_j) + \alpha C(0,0) - \alpha C_\infty$$
and
$$G_\infty = \lim_{k\to\infty} G_{ik} = \sum_{j=0}^{N} \Pi_j (C_j - L_j) + \alpha C(0,0) - M - \alpha \sum_{j=0}^{N} \Pi_j C(j,0).$$
When we say that the limiting optimal action in $(N,k)$ (or $(i,k)$) as $k \to \infty$ is repair, we mean that $F_\infty \le 0$ and $G_\infty \le 0$. The concept is important because we use the limiting action of repair to prove the major result.
Note that the validity of the conditions $F_\infty \le 0$, $G_\infty \le 0$ depends on the values of $\{C(j,0)\}$, $C = \{C_j\}$, $L = \{L_j\}$, and $M$. We assume $F_\infty \le 0$ and $G_\infty \le 0$ to obtain our result. Sufficient conditions for these on $C$, $L$, and $M$ will be presented at the end of the section.
We precede the major result by some lemmas and the following definition. If $B_k \to B_\infty$ is a convergent sequence, then we say that $B_k$ satisfies property A when
$$B_{k+1} \le \max\Big(B_k,\ \lim_{k\to\infty} B_k\Big).$$
LEMMA 1: Under the assumptions, if $G_j$ is nonincreasing in $j$, then
$$H_{ik} = \sum_{j=0}^{N} p^k_{ij} G_j$$
satisfies property A in $k$ and is nonincreasing in $i$.
PROOF: Since $P$ IFR implies $P^k$ IFR, the proof of monotonicity in $i$ is due to Derman [2]. Next,
$$H_{0k} \ge \sum_{j=0}^{N} \Pi_j H_{jk} \quad \text{since } H_{jk} \text{ is nonincreasing in } j$$
$$= \sum_{j=0}^{N} \Pi_j G_j \quad \text{by definition of } H_{jk} \text{ and by changing the order of summation,}$$
and
$$H_{0,k+1} = \sum_{j=0}^{N} p_{0j} H_{jk} \le H_{0k} \quad \text{since } H_{jk} \text{ is nonincreasing in } j.$$
Similarly,
$$H_{Nk} \le \sum_{j=0}^{N} \Pi_j H_{jk}$$
and
$$H_{N,k+1} \le \sum_{j=0}^{N} \Pi_j H_{j,k+1} = \sum_{j=0}^{N} \Pi_j G_j.$$
Finally, for $1 \le i \le N-1$,
$$H_{i,k+1} = \sum_{j=0}^{N} p_{ij} H_{jk} \le H_{ik} \vee \Big(\sum_{j=0}^{N} \Pi_j H_{jk}\Big) = H_{ik} \vee \Big(\sum_{j=0}^{N} \Pi_j G_j\Big)$$
since $H_{jk}$ is nonincreasing in $j$ and $P$ satisfies Assumption 4. $\square$
LEMMA 2:
(a) Let $F_k = \max_{j\in S} B_{jk}$, where $S$ is a finite set and each $B_{jk}$ satisfies property A in $k$. Then $F_k$ satisfies property A in $k$.
(b) Let $F^n_k$ be a sequence of functions that satisfy property A in $k$ and denote $F^n_\infty = \lim_{k\to\infty} F^n_k$. Assume that $F^*_k = \lim_{n\to\infty} F^n_k$ exists and that
$$\lim_{k\to\infty} F^*_k = \lim_{n\to\infty} F^n_\infty.$$
Then $F^*_k$ satisfies property A in $k$.
PROOF:
(a) Notice that
$$\lim_{k\to\infty} F_k = \max_{j\in S}\Big\{\lim_{k\to\infty} B_{jk}\Big\},$$
so that
$$F_{k+1} = \max_j B_{j,k+1} \le \max_j \max\Big\{B_{jk},\ \lim_{k\to\infty} B_{jk}\Big\} \quad \text{by property A}$$
$$\le \max\Big\{\max_j B_{jk},\ \max_j \lim_{k\to\infty} B_{jk}\Big\} = \max\Big\{F_k,\ \lim_{k\to\infty} F_k\Big\};$$
(b)
$$F^*_{k+1} = \lim_{n\to\infty} F^n_{k+1} \le \lim_{n\to\infty} \max\{F^n_k,\ F^n_\infty\} \quad \text{by property A}$$
$$= \max\Big\{\lim_{n\to\infty} F^n_k,\ \lim_{n\to\infty} F^n_\infty\Big\} = \max\Big(F^*_k,\ \lim_{k\to\infty} F^*_k\Big). \quad \square$$
We are now ready to prove:
THEOREM 1: Assume $F_\infty \le 0$ and $G_\infty \le 0$. Then a stationary, monotonic policy is optimal for the discounted problem. Furthermore, it is optimal to repair in state $(N,k)$, all $k$.
PROOF: We first assert that
(1) $$C^n(i,k) = \min_{m\in[1,m(n)]} \Big\{\sum_{j=0}^{N} p^k_{ij} B^{mn}_j\Big\},$$
where $m(n)$ is some finite value and each $B^{mn}_j$ is some nondecreasing function of $j$.
We show (1) by induction. (1) is trivial for $n = 1$. For case $n$,
(2) $$C^n(i,k) = \min\Big\{\sum_{j=0}^{N} p^k_{ij} C_j + \alpha C^{n-1}(0,0),\ \ \sum_{j=0}^{N} p^k_{ij} L_j + M + \alpha \sum_{j=0}^{N} p^{k+1}_{ij} C^{n-1}(j,0),\ \ \sum_{j=0}^{N} p^k_{ij} L_j + \alpha C^{n-1}(i,k+1)\Big\}.$$
By the induction hypothesis,
$$\sum_{j=0}^{N} p^k_{ij} L_j + \alpha C^{n-1}(i,k+1) = \min_{m\in[1,m(n-1)]} \Big\{\sum_{j=0}^{N} p^k_{ij}\Big(L_j + \alpha \sum_{l=0}^{N} p_{jl} B^{m,n-1}_l\Big)\Big\},$$
and the inspection term equals
$$\sum_{j=0}^{N} p^k_{ij}\Big(L_j + M + \alpha \sum_{l=0}^{N} p_{jl} C^{n-1}(l,0)\Big).$$
Substituting these into (2) gives (1).
Also, by induction on $n$ and Lemma 1,
(3) $C^n(i,k)$ is nondecreasing in $i$,
and thus
(4) $C(i,k)$ is nondecreasing in $i$.
By (1),
$$F^n_{ik} = \max_{m\in[1,m(n-1)]}\Big\{\sum_{j=0}^{N} p^k_{ij}\Big(C_j - L_j + \alpha C^{n-1}(0,0) - \alpha \sum_{l=0}^{N} p_{jl} B^{m,n-1}_l\Big)\Big\}$$
is nonincreasing in $i$ and satisfies property A in $k$ by Lemma 1.
Also,
$$F_{ik} = \lim_{n\to\infty} F^n_{ik} = \lim_{n\to\infty}\Big\{\sum_{j=0}^{N} p^k_{ij}(C_j - L_j) + \alpha\big(C^{n-1}(0,0) - C^{n-1}(i,k+1)\big)\Big\}$$
and
$$\lim_{k\to\infty} F^n_{ik} = \sum_{j=0}^{N} \Pi_j (C_j - L_j) + \alpha\big(C^{n-1}(0,0) - C^{n-1}_\infty\big).$$
Since
$$\lim_{k\to\infty} F_{ik} = \lim_{n\to\infty}\lim_{k\to\infty} F^n_{ik},$$
then by Lemmas 1 and 2, $F_{ik}$ is nonincreasing in $i$ and satisfies property A in $k$. Now pick any $i$. By property A, as $k$ is varied, $F_{ik}$ is nonincreasing until $F_{ik}$ goes below $F_\infty$. Beyond that point,
$$F_{ik} \le F_\infty \le 0.$$
Thus, there is a
$$\bar{k} = \min\{k : F_{ik} \le 0\}$$
such that for $k \ge \bar{k}$, $F_{ik} \le 0$, and thus repair is just as good as or better than no action. An analogous value can be found for $G_{ik}$. Thus we can find critical numbers $k^*(i)$ such that repair is optimal for $k \ge k^*(i)$. The monotonicity of $k^*(i)$ follows directly from the monotonicity of $F_{ik}$ and $G_{ik}$ in $i$.
Finally,
$$C^1(N,1) = \min\Big\{\sum_{j=0}^{N} p_{Nj} C_j,\ \sum_{j=0}^{N} p_{Nj} L_j\Big\} \ge \min\Big\{\sum_{j=0}^{N} p^{m+1}_{Nj} C_j,\ \sum_{j=0}^{N} p^{m+1}_{Nj} L_j\Big\} \quad \text{by Lemma 1}$$
$$= C^1(N, m+1).$$
Thus
(5) $$C^1(N,1) \ge \lim_{k\to\infty} C^1(N,k) = C^1_\infty,$$
and, by induction and carrying to the limit,
(6) $$C(N,1) \ge C_\infty.$$
Hence
$$F_{N0} = C_N - L_N + \alpha C(0,0) - \alpha C(N,1) \le F_\infty$$
by (6) and Assumption 2. Similarly, by (5) and Assumption 2,
$$G_{N0} \le G_\infty.$$
By property A, $F_{Nk} \le F_\infty \le 0$ and $G_{Nk} \le G_\infty \le 0$, and repair is optimal in state $(N,k)$, any $k$. $\square$
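The threshold structure established in Theorem 1 can be observed numerically by successive approximation of the recursion for $C(i,k)$, truncating the $k$ axis. The sketch below uses an invented two-state instance; every cost, probability, and the discount factor are assumptions chosen so that the assumptions of the model hold ($a + b \le 1$, $C$ and $L$ nondecreasing, $C - L$ nonincreasing).

```python
import numpy as np

# Illustrative two-state instance (N = 1); every number here is an assumption.
a, b  = 0.3, 0.1                        # good->bad and bad->good probabilities
P     = np.array([[1 - a, a], [b, 1 - b]])
C     = np.array([1.0, 2.0])            # repair costs C_j (nondecreasing in j)
L     = np.array([0.0, 3.0])            # one-period losses L_j (C - L nonincreasing)
M     = 0.5                             # inspection cost
alpha = 0.9                             # discount factor
K     = 60                              # truncation point of the k axis

pk = [np.eye(2)]                        # pk[k] = P**k
for _ in range(K + 1):
    pk.append(pk[-1] @ P)

def actions(V, i, k):
    """Costs of repair, inspection, and no action in state (i, k)."""
    kn = min(k + 1, K)                  # freeze k at the truncation point
    return (pk[k][i] @ C + alpha * V[0, 0],
            pk[k][i] @ L + M + alpha * (pk[k + 1][i] @ V[:, 0]),
            pk[k][i] @ L + alpha * V[i, kn])

V = np.zeros((2, K + 1))
for _ in range(600):                    # successive approximation (a contraction)
    V = np.array([[min(actions(V, i, k)) for k in range(K + 1)]
                  for i in range(2)])

def best(i, k):
    return ["repair", "inspect", "wait"][int(np.argmin(actions(V, i, k)))]

print([best(1, k) for k in range(3)], [best(0, k) for k in range(4)])
```

With these numbers the printout shows a monotonic policy: repair is optimal in the bad state for every $k$, while the good state waits at $k = 0$ and repairs for $k \ge 1$, i.e., $k^*(1) = 0$ and $k^*(0) = 1$.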
Sufficient conditions for $F_\infty \le 0$ and $G_\infty \le 0$, and consequently for the optimality of a nontrivial optimal policy, are
$$J_\alpha = \sum_{j=0}^{N} \Pi_j (C_j - L_j) + \sum_{i=1}^{\infty} \alpha^i \Big(\sum_{j=0}^{N} p^i_{0j} L_j - \sum_{j=0}^{N} \Pi_j L_j\Big) \le 0$$
and
$$K_\alpha = -M + \sum_{j=0}^{N} \Pi_j (C_j - L_j) + \alpha \sum_{j=0}^{N} \Pi_j D_j \le 0,$$
where
$$D_j = \min\Big\{0,\ \ (1-\alpha)\sum_{i=0}^{\infty} \alpha^i \sum_{l=0}^{N} p^i_{0l} L_l - C_j,\ \ L_0 - L_j + \alpha \sum_{l=0}^{N} (p_{0l} - p_{jl}) C_l\Big\}.$$
Examining these conditions: if
$$\sum_{j=0}^{N} \Pi_j (C_j - L_j) \le 0,$$
the conditions are automatically satisfied. If not, the other nonpositive terms are added until, hopefully, the sum becomes nonpositive. $\sum_{j=0}^{N} \Pi_j (C_j - L_j)$ represents the one-period cost differences inherent in $F_\infty$ and $G_\infty$, and the other terms represent bounds on future cost differences. A proof that these conditions are sufficient is presented in [14], Theorem 5.3.3 and Lemma 5.3.4.
3. THE AVERAGE-COST CASE
The results for the discounted case can be extended to give results for the average-cost case.
THEOREM 2: Assume that the limiting optimal action in state $(N,k)$ is repair for the average-cost case, and, in particular, that
(7) $$\lim_{\alpha_0\to 1}\ \sup_{\alpha \ge \alpha_0} F_\infty < 0$$
and
(8) $$\lim_{\alpha_0\to 1}\ \sup_{\alpha \ge \alpha_0} G_\infty < 0,$$
where it is noted that $F_\infty$ and $G_\infty$ depend on the discount factor $\alpha$. Then a monotonic policy is optimal for the average-cost problem, and it is optimal to repair in state $(N,k)$, any $k$. Furthermore, all critical numbers are finite.
The extension of the discounted case to the average-cost case is generally a direct result of Ross [16], Theorems 1.1 to 1.3. Our proof extends these results since we also show the existence of finite critical numbers when (7) and (8) hold.
PROOF: Note that
$$C(i,k) \le \sum_{j=0}^{N} p^k_{ij} C_j + \alpha C(0,0) \le C_N + C(0,0),$$
where $C_N = \max_j \{C_j\}$, and hence
(9) $$C(i,k) - C(0,0) \le C_N.$$
From Lemma 1, if $F_j$ is nondecreasing in $j$, then
$$\sum_{j=0}^{N} p^{k+1}_{0j} F_j = \sum_{j=0}^{N} p_{0j}\Big(\sum_{l=0}^{N} p^k_{jl} F_l\Big) \ge \sum_{j=0}^{N} p^k_{0j} F_j.$$
Hence
$$C^1(0,k) = \min\Big\{\sum_{j=0}^{N} p^k_{0j} C_j,\ \sum_{j=0}^{N} p^k_{0j} L_j\Big\}$$
is nondecreasing in $k$.
Hence, by induction and (2),
$$C^n(0,k) = \min\Big\{\sum_{j=0}^{N} p^k_{0j} C_j + \alpha C^{n-1}(0,0),\ \ \sum_{j=0}^{N} p^k_{0j} L_j + M + \alpha \sum_{j=0}^{N} p^{k+1}_{0j} C^{n-1}(j,0),\ \ \sum_{j=0}^{N} p^k_{0j} L_j + \alpha C^{n-1}(0,k+1)\Big\}$$
is nondecreasing in $k$. Taking limits,
(10) $C(0,k)$ is nondecreasing in $k$.
Thus
$$C(i,k) \ge C(0,k) \quad \text{by (4)}$$
$$\ge C(0,0) \quad \text{by (10)}.$$
Combining this with (9) we have
(11) $$|C(i,k) - C(0,0)| \le C_N.$$
The hypothesis of Theorem 1.3 in Ross [16] is hence true.
We now examine (7) and (8). Note that (7) and (8) imply that for some $\alpha_0$ and all $\alpha_0 \le \alpha < 1$,
$$F_\infty < 0, \qquad G_\infty < 0.$$
So by Theorem 1, monotonic policies are optimal for the infinite-horizon case for all $1 > \alpha \ge \alpha_0$. If we can show that all critical numbers for all $1 > \alpha \ge \alpha_0$ are bounded, then for $\alpha > \alpha_0$ there are a finite number of possible optimal policies for the discounted problem. Thus we can pick a subsequence, $\alpha \to 1$, such that one monotonic policy is optimal. Combining this with Theorem 1.3 of Ross [16], we see that this same policy is average-cost optimal. Furthermore, all of the critical numbers are bounded.
So we must show that the critical numbers are uniformly bounded for $1 > \alpha \ge \alpha_0$, some $\alpha_0$. By (7) and (8) we can pick $\alpha_0$ such that
$$\sup_{\alpha \ge \alpha_0} F_\infty < 0$$
and
$$\sup_{\alpha \ge \alpha_0} G_\infty < 0.$$
If we can show that
$$\bar{F}_k = \sup_{\alpha \ge \alpha_0} F_{0k} \quad \text{and} \quad \bar{G}_k = \sup_{\alpha \ge \alpha_0} G_{0k}$$
converge to $\sup_{\alpha \ge \alpha_0} F_\infty$ and $\sup_{\alpha \ge \alpha_0} G_\infty$ respectively, then we can find $\tilde{k}$ such that $\bar{G}_{\tilde{k}} < 0$ and $\hat{k}$ such that $\bar{F}_{\hat{k}} < 0$, and thus $k^*(0)$ (and thus $k^*(i)$, all $i$) would be bounded by $\max\{\tilde{k}, \hat{k}\}$ for all $1 > \alpha \ge \alpha_0$. This would give us our result. By examining the expressions for $F_{0k}$, $G_{0k}$, $F_\infty$, and $G_\infty$, the convergence would hold if
(12) $$\sum_{j=0}^{N} |p^k_{0j} - \Pi_j|\,\big(C(j,0) - C(0,0)\big) \to 0$$
and
(13) $$C(0,k) - C_\infty \to 0$$
uniformly in $\alpha \ge \alpha_0$. (12) holds because of relation (11). A proof of (13) is given in the Appendix. $\square$
As in the discounted case, we have sufficient conditions on the costs to assure the limiting optimality of repair (that is, (7) and (8)). These conditions are
(14) $$J_1 < 0 \quad \text{and} \quad K_1 < 0,$$
where $J$ and $K$ have been defined for the discounted case and where we note that in computing the $D_j$'s we set
$$(1-\alpha)\sum_{i=0}^{\infty} \alpha^i \sum_{j=0}^{N} p^i_{0j} L_j = \sum_{j=0}^{N} \Pi_j L_j.$$
Since
$$\lim_{\alpha\to 1} J_\alpha = J_1 \quad \text{and} \quad \lim_{\alpha\to 1} K_\alpha = K_1,$$
it is clear from the discussion of sufficient conditions in the discounted case that (14) implies (7) and (8).
4. APPLICATIONS AND DISCUSSION
The results in Sections 2 and 3 are useful in the general case for deteriorating systems in that the assumptions and conditions involving $J_\alpha$ and $K_\alpha$ can be tested to determine whether a monotonic policy is optimal under either of the two optimality criteria. There are, however, two subclasses of deteriorating systems that are of particular interest for application.
The first class consists of all IFR and upper triangular ($p_{ij} = 0$, $j < i$) matrices. Note that such matrices satisfy Assumption 4. It turns out in this case that the conditions $J_\alpha \le 0$, $K_\alpha \le 0$, together with the conditions for trivial (perpetual no action) optimal policies, are collectively exhaustive. That is, a monotonic policy of some type, either trivial or nontrivial, is always optimal. (Matrices of this type that satisfy an additional condition known as total positivity of order two are assumed in [15].) The upper triangular assumption is of practical interest in that it stipulates that the process cannot get better (move to a lower state) by itself, and this is often the case for a deteriorating machine or system.
The second class consists of $2 \times 2$ IFR matrices. For this class, a process is in either a good or a bad state and can be characterized by the following transition matrix:
$$P = \begin{pmatrix} 1-a & a \\ b & 1-b \end{pmatrix},$$
where $a$ is the probability of going from the good to the bad state and $b$ is the probability of going from the bad to the good state. In addition to simplicity, the class is especially interesting for two reasons. First, Assumption 4 reduces to the single relation
$$a + b \le 1.$$
Second, we can completely characterize the optimal policy. It should be noted that this class is not covered by the assumptions of [15] unless $b = 0$. Ross [17] derived the optimal policy structure for this problem, with minor differences in convention, but only for $b = 0$. This paper consequently characterizes the optimal policy when $b > 0$, that is, when the process can go back and forth between the good state and the bad state.
The complete characterization of the optimal policy when $J_\alpha \le 0$ and $K_\alpha \le 0$ (or $J_1 < 0$ and $K_1 < 0$ in the average-cost case) consists of repair in state $(1,k)$, any $k$, or in state $(0,k)$, $k \ge k^*(0)$, which is the monotonic part. Furthermore, the region $(0,k)$, $k < k^*(0)$, can be broken up into, at most, three additional regions: a region of no-action-optimal states for $0 \le k < k_*$, a region of inspection for $k_* \le k < k^{**}$, and another region of no action for $k^{**} \le k < k^*(0)$, for some choice of $k_* \le k^{**} \le k^*(0)$. The policy is diagrammed in Figure 2. The result is summarized, along with more specific conditions involving $J_\alpha$ and $K_\alpha$, in:
THEOREM 3: Assume without loss of generality that $L_0 = 0$ for the $2 \times 2$ case ($N = 1$). Let
$$S_\alpha = C_0 b + C_1 a - aL_1\big(1 + (a+b)\alpha\big)\big/\big(1 - \alpha + (a+b)\alpha\big)$$
and
$$T_\alpha = -M + \Big[C_0 b + C_1 a(1-\alpha) - L_1 a + \alpha^2 a^2 L_1\big/\big(1 - \alpha + (a+b)\alpha\big)\Big]\Big/(a+b).$$
FIGURE 2. Example showing form of optimal policy for case of two real states. (The vertical axis is the number of time periods $k$ since the real state was last known with certainty; reading upward from $k = 0$, the labeled regions are NO ACTION, INSPECTION, NO ACTION, and REPAIR, with the repair region beginning at $k^*(0)$.)
Assume for the discounted criterion that $S_\alpha \le 0$ and $T_\alpha \le 0$, and for the average-cost criterion that $S_1 < 0$ and $T_1 < 0$. Then under the respective criterion, when
$$a + b \le 1,$$
a monotonic policy is optimal; furthermore, repair is optimal in state $(1,k)$, all $k$, and the set $\{(0,k)\}$ can be broken up into, at most, four regions: a no-action region, an inspection region, another no-action region, and a repair region.
PROOF: By using the fact that
$$p^i_{01} = a\big(1 - (1-a-b)^i\big)\big/(a+b),$$
we see that by substitution,
$$S_\alpha = (a+b)\,J_\alpha \quad \text{and} \quad T_\alpha = K_\alpha.$$
Hence, by Theorems 1 and 2, a monotonic policy is optimal and the region $\{(1,k)\}$, $k \ge 0$, is a repair region. It remains to show that the region $\{(0,k)\}$, $0 \le k < k^*(0)$, can be broken up into, at most, three additional regions. For the discounted criterion over a finite horizon, let $J^n_k$ denote the cost of choosing inspection minus the cost of choosing no action, when an optimal policy is subsequently followed. From the recursive relationship (2),
$$J^n_k = \alpha\Big(\sum_{j=0}^{N} p^{k+1}_{0j} C^{n-1}(j,0) - C^{n-1}(0,k+1)\Big) + M$$
$$= M + \alpha \max_{m\in[1,m(n-1)]} \Big\{\sum_{j=0}^{N} p^{k+1}_{0j}\big[C^{n-1}(j,0) - B^{m,n-1}_j\big]\Big\} \quad \text{by (1)}.$$
Now the function in brackets is either nonincreasing or nondecreasing in $j$, since $j = 0$ or $1$. And if $F_j$ is nondecreasing in $j$, then
$$\sum_{j=0}^{N} p^{k+1}_{0j} F_j = \sum_{j=0}^{N} p^k_{0j}\Big(\sum_{l=0}^{N} p_{jl} F_l\Big) \ge \sum_{j=0}^{N} p^k_{0j} F_j \quad \text{by Lemma 1}.$$
Thus each function in braces is either nonincreasing or nondecreasing in $k$, and the maximum is consequently unimodal with one minimum. Since $J^n_k$ is unimodal with one minimum, $\lim_n J^n_k$ is unimodal with one minimum. But this quantity is the cost of choosing inspection minus the cost of choosing no action (and then behaving optimally) for the discounted criterion over an infinite horizon. The assertion then follows for the discounted case, since the optimal action in $(0,k)$, $0 \le k < k^*(0)$, depends on the sign of $\lim_n J^n_k$. The assertion for the average-cost criterion follows by the proof of Theorem 2. $\square$
Two comments are relevant here. Unless $C_0$ and $C_1$ are somewhat larger than $L_1$, the cost conditions will often be satisfied. The existence of "four regions" for the set $\{(0,k) : k \ge 0\}$ is a typical result for these types of problems; it is the subject of [15] and an important facet of [17]. As demonstrated by examples in [17] and [20], four regions can certainly occur. The important result is that, at most, four regions can occur.
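The closed form for $p^i_{01}$ used in the proof of Theorem 3, and the geometric convergence to the limiting probabilities invoked in the Appendix, are easy to verify numerically for the two-state chain. The values of $a$ and $b$ below are illustrative only.

```python
import numpy as np

a, b = 0.3, 0.1                       # illustrative transition probabilities
P = np.array([[1 - a, a], [b, 1 - b]])

def p01(i):
    """Closed form p^i_01 = a(1 - (1-a-b)^i)/(a+b) from the proof of Theorem 3."""
    return a * (1 - (1 - a - b) ** i) / (a + b)

Pk = np.eye(2)
for i in range(1, 8):                 # compare with explicit matrix powers
    Pk = Pk @ P
    assert abs(Pk[0, 1] - p01(i)) < 1e-10

# |p^i_01 - Pi_1| decays geometrically at rate 1 - a - b, as in the Appendix.
Pi1 = a / (a + b)
errs = [abs(p01(i) - Pi1) for i in range(1, 7)]
print([e2 / e1 for e1, e2 in zip(errs, errs[1:])])   # each ratio is 1 - a - b
```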
APPENDIX
PROOF THAT $C(0,k) \to C_\infty$ UNIFORMLY IN $\alpha$
For any given $1 > \alpha \ge \alpha_0$, for $k$ big enough, since $F_\infty < 0$ and $G_\infty < 0$,
$$C(0,k) = \sum_{j=0}^{N} p^k_{0j} C_j + \alpha C(0,0) \le \sum_{j=0}^{N} p^k_{0j} L_j + \alpha C(0,k+1).$$
Thus for $\alpha \ge \alpha_0$,
$$C_\infty = \lim_{k\to\infty} C(0,k) = \sum_{j=0}^{N} \Pi_j C_j + \alpha C(0,0) \le \sum_{j=0}^{N} \Pi_j L_j + \alpha C_\infty, \quad \text{from above.}$$
Using this relationship recursively,
$$C_\infty = \sum_{j=0}^{N} \Pi_j C_j + \alpha C(0,0) = \min\Big\{\sum_{j=0}^{N} \Pi_j C_j + \alpha C(0,0),\ \ \sum_{j=0}^{N} \Pi_j L_j + \alpha \sum_{j=0}^{N} \Pi_j C_j + \alpha^2 C(0,0),\ \ \ldots,$$
$$\big(1-\alpha^{n+1}\big)\big/(1-\alpha)\sum_{j=0}^{N} \Pi_j L_j + \alpha^{n+1} \sum_{j=0}^{N} \Pi_j C_j + \alpha^{n+2} C(0,0),\ \ldots\Big\}.$$
Now let $k \ge \tilde{k}$, $\alpha \ge \alpha_0$. Recall that $\tilde{k}$ is defined in the proof of Theorem 2 and that $\bar{G}_k < 0$ for $\alpha \ge \alpha_0$, so that inspection cannot be optimal in $(0,k)$. Then
$$C(0,k) = \min\Big\{\sum_{j=0}^{N} p^k_{0j} C_j + \alpha C(0,0),\ \ \sum_{j=0}^{N} p^k_{0j} L_j + \alpha C(0,k+1)\Big\}$$
$$= \min\Big\{\sum_{j=0}^{N} p^k_{0j} C_j + \alpha C(0,0),\ \ \sum_{j=0}^{N} p^k_{0j} L_j + \alpha \sum_{j=0}^{N} p^{k+1}_{0j} C_j + \alpha^2 C(0,0),\ \ \ldots,$$
$$\sum_{i=0}^{n} \alpha^i \sum_{j=0}^{N} p^{k+i}_{0j} L_j + \alpha^{n+1} \sum_{j=0}^{N} p^{k+n+1}_{0j} C_j + \alpha^{n+2} C(0,0),\ \ldots\Big\},$$
by recursively substituting for $C(0,k+1)$, $C(0,k+2)$, and so on.
In the above minimization, $n$ denotes how many periods to wait before repair. Since $F_\infty < 0$ and $G_\infty < 0$, the minimum will occur at some finite $n$ for each $\alpha$.
Since
$$\big|\min_i (A_i) - \min_i (B_i)\big| \le \sup_i \{|A_i - B_i|\}$$
when the minima are attained,
(15) $$|C(0,k) - C_\infty| \le \sup_n \Big\{\Big|\sum_{j=0}^{N} (p^k_{0j} - \Pi_j) C_j\Big|,\ \ldots,\ \Big|\sum_{i=0}^{n} \alpha^i \sum_{j=0}^{N} (p^{k+i}_{0j} - \Pi_j) L_j + \alpha^{n+1} \sum_{j=0}^{N} (p^{k+n+1}_{0j} - \Pi_j) C_j\Big|,\ \ldots\Big\}.$$
Recalling Assumption 3 and the ergodic theorem (e.g., Fisz [6]),
$$|p^{k+i}_{0j} - \Pi_j| \le \bar{C}(1-\delta)^{(k+i)/r}, \quad \text{where } \bar{C} \text{ is a constant.}$$
Substituting into (15),
$$|C(0,k) - C_\infty| \le \text{constant} \times (1-\delta)^{k/r} \to 0 \quad \text{uniformly.} \quad \square$$
ACKNOWLEDGEMENT
The author would like to thank G. J. Lieberman of Stanford University for supervising this research while the author was a graduate student, and D. A. Butler and others at Stanford University and the State University of New York at Stony Brook for helping review the manuscript.
BIBLIOGRAPHY
[1] Blackwell, D., "Discounted Dynamic Programming," Annals of Mathematical Statistics 36, 226-235 (1965).
[2] Derman, C., "On Optimal Replacement Rules When Changes of State are Markovian," Mathematical Optimization Techniques, R. Bellman, ed. (University of California Press, Berkeley, 1963).
[3] Derman, C., "Markovian Sequential Control Processes — Denumerable State Space," Journal of Mathematical Analysis and Applications 10, 295-302 (1965).
[4] Derman, C., Finite State Markovian Decision Processes (Academic Press, New York, 1970).
[5] Emmons, H. and J. Poimboeuf, "Optimal Markovian Replacement with Inspection Costs," Technical Report No. 131, Department of Operations Research, Cornell University, Ithaca, N.Y. (1971).
[6] Fisz, M., Probability Theory and Mathematical Statistics (John Wiley and Sons, Inc., New York, 1963).
[7] Girshick, M. A. and H. Rubin, "A Bayes Approach to a Quality Control Model," Annals of Mathematical Statistics 23, 114-125 (1952).
[8] Karlin, S., Total Positivity, Vol. 1 (Stanford University Press, Stanford, Calif., 1968).
[9] Karlin, S., A First Course in Stochastic Processes (Academic Press, New York, 1969).
[10] Klein, M., "Inspection-Maintenance-Replacement Schedules under Markovian Deterioration," Management Science 9, 25-32 (1962).
[11] Kolesar, P., "Minimum Cost Replacement Under Markovian Deterioration," Management Science 12, 694-706 (1966).
[12] Lehmann, E., Testing Statistical Hypotheses (John Wiley and Sons, Inc., New York, 1960).
[13] Ray, D. K., "Replacement Rules for Partially Observable Equipment," Technical Report No. 37, Operations Research Center, M.I.T., Cambridge, Mass. (1967).
[14] Rosenfield, D. B., "Deteriorating Markov Processes Under Uncertainty," Technical Report No. 162, Department of Operations Research and Department of Statistics, Stanford University, Stanford, Calif. (1974).
[15] Rosenfield, D. B., "Markovian Deterioration with Uncertain Information," Operations Research 24, 141-155 (Jan.-Feb. 1976).
[16] Ross, S. M., "Non-Discounted Denumerable Markovian Decision Models," Annals of Mathematical Statistics 39, 412-423 (1968).
[17] Ross, S. M., "Quality Control Under Markovian Deterioration," Management Science 17, 587-596 (1971).
[18] Smallwood, R. D. and E. J. Sondik, "The Optimal Control of Partially Observable Markov Processes Over a Finite Horizon," Operations Research 21, 1071-1088 (1973).
[19] Tafeen, S., "A Markov Decision Process with Costly Inspection," Technical Report No. 107, Department of Operations Research and Department of Statistics, Stanford University, Stanford, Calif. (1968).
[20] Taylor, H. M., "Markovian Sequential Replacement Processes," Annals of Mathematical Statistics 36, 1677-1694 (1965).
AN ALGORITHM FOR THE
SEGREGATED STORAGE PROBLEM
A. W. Neebe
University of North Carolina at Chapel Hill
and
M. R. Rao
University of Rochester
Rochester, New York
ABSTRACT
The segregated storage problem involves the optimal distribution of products among
compartments with the restriction that only one product may be stored in each compart-
ment. The storage capacity of each compartment, the storage demand for each product,
and the linear cost of storing one unit of a product in a given compartment are specified.
The problem is reformulated as a large set-packing problem, and a column generation
scheme is devised to solve the associated linear programming problem. In case of fractional
solutions, a branch and bound procedure is utilized. Computational results are presented.
1. INTRODUCTION
Consider a set of products indexed by $I = \{1, 2, \ldots, m\}$, each with known storage demand $a_i > 0$, and a set of compartments indexed by $J = \{1, 2, \ldots, n\}$, each with known storage capacity $b_j > 0$. The Segregated Storage Problem (SSP) is to minimize total storage costs subject to the conditions that all products be stored, and that at most one product be stored in any compartment. One example of the SSP is the silo problem, where a number of different grains are to be stored in the compartments of a silo. Clearly, no compartment may simultaneously contain more than one variety of grain. Examples of the SSP are given by Eilon and Mallya [2], Shlifer and Naor [9], White [10], and White and Francis [11].
Since all demand must be met, it is assumed that external storage space (an $(n+1)$st compartment) is available at a premium. Defining $J' = J \cup \{n+1\}$, note that more than one product may be stored externally. Let $c_{ij}$, $i \in I$, $j \in J'$, be the per unit storage cost of product $i$ in compartment $j$. Let $x_{ij}$, $i \in I$, $j \in J'$, be the number of units of product $i$ to be stored in compartment $j$.
Then the SSP is to
(1) $$\text{minimize} \quad \sum_{i\in I} \sum_{j\in J'} c_{ij} x_{ij}$$
subject to
(2) $$\sum_{j\in J'} x_{ij} = a_i \quad \text{for all } i \in I$$
(3) $$\sum_{i\in I} x_{ij} \le b_j \quad \text{for all } j \in J$$
(4) $$x_{ij} \cdot x_{kj} = 0 \quad \text{for all } i \in I,\ k \in I,\ i \ne k,\ \text{and all } j \in J$$
(5) $$x_{ij} \ge 0 \quad \text{for all } i \in I,\ j \in J'.$$
Constraints (2) insure that all products are stored. Constraints (3) insure that storage capacity is not exceeded. Constraints (4) guarantee that at most one product is stored in a compartment. Observe that since all of the $m$ products can be allocated to external storage, the SSP always contains at least one feasible solution.
Notice that (1)-(3), (5) is a transportation problem. White and Francis [11] take advantage of this fact and propose a branch and bound algorithm which solves a transportation problem at every node of the branch and bound tree. As an alternate approach, they suggest an extreme point ranking procedure. No computational results are given for either algorithm. However, a wealth of computational experience on a branch and bound algorithm is provided by Dannenbring and Khumawala [1]. They compare various combinations of node selection rules and branching decision rules, and furthermore give computational results for a number of heuristic techniques. An implicit enumeration algorithm which takes advantage of the graph theoretic properties of the SSP is developed by Evans [3]. A moderate amount of computational results is recorded.
In the following sections, we present a different formulation of the SSP, together with a proposed algorithm for its solution, an example, and computational results. The computational results indicate that our algorithm is substantially faster than Dannenbring and Khumawala's algorithm.
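To make the relaxation remark concrete, the sketch below solves the transportation problem (1)-(3), (5) for a tiny invented instance with scipy: two products, two compartments, and an expensive external compartment. Note that a transportation solution need not satisfy the segregation constraint (4) in general; it merely happens to here.

```python
import numpy as np
from scipy.optimize import linprog

# Toy data (assumed): m = 2 products, n = 2 compartments plus external j = 3.
a_dem = [3.0, 2.0]                 # product demands a_i
b_cap = [4.0, 3.0]                 # internal compartment capacities b_j
c = np.array([[1.0, 2.0, 8.0],     # c_ij, external storage priced at a premium
              [2.0, 1.0, 8.0]])

# Variables x_ij flattened row-wise: (i, j) -> 3*i + j.
A_eq = [[1, 1, 1, 0, 0, 0],        # demand rows, constraints (2)
        [0, 0, 0, 1, 1, 1]]
A_ub = [[1, 0, 0, 1, 0, 0],        # capacity rows (internal only), constraints (3)
        [0, 1, 0, 0, 1, 0]]

res = linprog(c.ravel(), A_ub=A_ub, b_ub=b_cap, A_eq=A_eq, b_eq=a_dem,
              bounds=(0, None), method="highs")
print(res.fun, res.x.reshape(2, 3))   # optimal cost 5.0: x[0,0] = 3, x[1,1] = 2
```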
2. AN ALTERNATE FORMULATION
Define an allocation of product $i$ to be any allocation of all $a_i$ units of product $i$ among the $n+1$ compartments such that the capacity of no compartment is exceeded. Thus in general any allocation can be represented by the vector $w_i = (w_{i1}, w_{i2}, \ldots, w_{i,n+1})$, where in addition
$$\sum_{j\in J'} w_{ij} = a_i$$
and
$$0 \le w_{ij} \le b_j \quad \text{for all } j \in J'.$$
The cost of allocation $w_i$ equals $\sum_{j\in J'} c_{ij} w_{ij}$.
An allocation $w_i$ does not make efficient use of the compartments if the lesser-cost compartments used by $w_i$ are not filled to capacity. For this reason we can restrict our attention to the so-called efficient allocations. Intuitively, the efficient allocations dominate (in either a cost sense or a compartment-utilization sense) all nonefficient allocations. An efficient allocation is defined as follows: let $S(w_i) =$
$\{j : w_{ij} > 0\}$ be the set of all compartments which are utilized by product $i$ using allocation $w_i$. Then $w_i$ is said to be efficient if
(a) for all $j \in J'$, $w_{ij} = 0$ or $b_j$, except for at most one $j = j^*$, where $0 < w_{ij^*} < b_{j^*}$, and
(b) for all $j \in J'$, if $w_{ij} > 0$, then $w_{ih} = b_h$ for all $h \in S(w_i)$ such that $c_{ih} < c_{ij}$.
In the remainder of this paper we will let $y^k_i = (y^k_{i1}, y^k_{i2}, \ldots, y^k_{i,n+1})$ denote the $k$th efficient allocation of product $i$.
An efficient allocation $y^k_i$ can be obtained from a (nonefficient) allocation $w_i$ in the following fashion. Let the costs $c_{ij}$, $j \in S(w_i)$, be ranked in nondecreasing order $c_{i(1)} \le c_{i(2)} \le \ldots$, where ties may be broken arbitrarily. Clearly, the least cost way of allocating product $i$ among the compartments in $S(w_i)$ is to allocate $y^k_{i(1)} = b_{(1)}$ units to compartment $(1)$, $y^k_{i(2)} = b_{(2)}$ units to compartment $(2)$, and so on, until all $a_i$ units are allocated. Assume compartment $(l)$ is the last compartment to be allocated any of product $i$; thus $(l)$ is the smallest index $h$ such that $\sum_{j=1}^{h} b_{(j)} \ge a_i$. Then compartment $(l)$ is allocated
$$y^k_{i(l)} = a_i - \sum_{j=1}^{l-1} b_{(j)}$$
units, while $y^k_{i(j)} = b_{(j)}$ for all $j < l$, and $y^k_{i(j)} = 0$ for all $j > l$. The cost of $y^k_i$ equals
$$c^k_i = \sum_{j=1}^{l-1} c_{i(j)} b_{(j)} + c_{i(l)}\Big(a_i - \sum_{j=1}^{l-1} b_{(j)}\Big).$$
Clearly, the cost of $y^k_i$ is no greater than the cost of $w_i$.
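The construction just described is a simple greedy procedure. A sketch, with invented demand, capacities, and unit costs, is:

```python
def efficient_allocation(a_i, costs, caps):
    """Fill the cheapest compartments to capacity until demand a_i is met;
    at most one compartment ends up partially filled, as in the definition
    of an efficient allocation. Returns the allocation and its cost."""
    order = sorted(range(len(costs)), key=lambda j: costs[j])  # nondecreasing c_ij
    y, remaining = [0.0] * len(costs), a_i
    for j in order:
        if remaining <= 0:
            break
        y[j] = min(caps[j], remaining)
        remaining -= y[j]
    return y, sum(c * q for c, q in zip(costs, y))

# Hypothetical data: demand 10, two compartments plus a large external one.
y, cost = efficient_allocation(10, costs=[2.0, 1.0, 5.0], caps=[4.0, 5.0, 100.0])
print(y, cost)   # [4.0, 5.0, 1.0] 18.0
```

The two cheaper compartments are filled to capacity and only the last compartment touched is partial, so the result satisfies both conditions (a) and (b) of the efficiency definition.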
Assume there are $n_i$ efficient allocations of product $i$, and let $N_i = \{1, 2, \ldots, n_i\}$. Define decision variables $z^k_i$, $i \in I$, $k \in N_i$, such that
$$z^k_i = \begin{cases} 1 & \text{if efficient allocation } y^k_i \text{ is used} \\ 0 & \text{otherwise.} \end{cases}$$
For $i \in I$, $j \in J$, and $k \in N_i$ define a vector $g^k_i = (g^k_{i1}, g^k_{i2}, \ldots, g^k_{in})$ such that
(6) $$g^k_{ij} = \begin{cases} 1 & \text{if } y^k_{ij} > 0 \\ 0 & \text{otherwise.} \end{cases}$$
Thus the ones in the vector $g^k_i$ identify the compartments that are assigned to product $i$ using efficient allocation $y^k_i$. Observe that while the vector $y^k_i$ is of length $(n+1)$, the corresponding vector $g^k_i$ is only of length $n$. This is the case because, in the SSP, it is irrelevant how many different products are allocated to the external compartment.
The new formulation of the SSP is to
(7) $$\text{minimize} \quad \sum_{i\in I} \sum_{k\in N_i} c^k_i z^k_i$$
subject to
(8) $$\sum_{i\in I} \sum_{k\in N_i} g^k_{ij} z^k_i \le 1 \quad \text{for all } j \in J$$
(9) $$\sum_{k\in N_i} z^k_i = 1 \quad \text{for all } i \in I$$
(10) $$z^k_i = 0 \text{ or } 1 \quad \text{for all } i \in I,\ k \in N_i.$$
The constraints (8) insure that no compartment is assigned more than one product, while the constraints (9) guarantee that every product is fully allocated. Observe that (7)-(10) is a set-packing problem with the additional constraint (9) specifying that exactly one efficient allocation for every product is used. If the binary constraints (10) are relaxed to
(11) $$z^k_i \ge 0,$$
a linear program (LP) is obtained. For the remainder of this paper we will refer to (7)-(9), (11) as the LP version of the SSP. We will show how this LP may be solved using a column generation technique in which only the "best" (with respect to cost) columns are generated.
3. SOLUTION OF THE LINEAR PROGRAM
Let $B$ be any feasible basis to (8), (9), (11). The matrix $B$ contains $(n+m)$ rows. An additional "objective function" row is included to obtain the Simplex multipliers. Any column of $B$ corresponding to efficient allocation $y^k_i$ is of the form $(g^k_{i1}, g^k_{i2}, \ldots, g^k_{in}, e_i)^T$, where $e_i$ is the $i$th unit vector and $T$ denotes transpose.
Any feasible solution to the SSP can be converted into an easily invertible, initial, feasible basis $B_0$. Let $(y_1, y_2, \ldots, y_m)$ be the efficient allocations of the $m$ products associated with any feasible solution to the SSP. A feasible solution may be obtained by executing any heuristic for the SSP. Let $g_i = (g_{i1}, g_{i2}, \ldots, g_{in})$, as defined in (6), correspond to $y_i$, $i \in I$. Let the $i$th column of $B_0$ equal $(g_i, e_i)^T$, $i \in I$. Let the $(m+j)$th column of $B_0$ equal $(e_j, \phi)^T$, $j \in J$, corresponding to the $j$th slack variable of (8), where $\phi$ is a vector containing $m$ zeroes. Thus if
$$G = [g^T_1, g^T_2, \ldots, g^T_m],$$
then
(12) $$B_0 = \begin{bmatrix} G & I \\ I & 0 \end{bmatrix},$$
where $I$ is the identity matrix of appropriate size and $0$ is a matrix of all zeroes. The inverse is readily seen to be
$$B_0^{-1} = \begin{bmatrix} 0 & I \\ I & -G \end{bmatrix}.$$
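The claimed inverse can be checked directly: with $G$ the $n \times m$ 0-1 matrix whose columns are the $g_i$, block multiplication gives $B_0 B_0^{-1} = I$. A numerical spot-check on random data (sizes and entries invented):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 3, 4                                   # toy numbers of products/compartments
G = rng.integers(0, 2, size=(n, m)).astype(float)

B0     = np.block([[G,          np.eye(n)],
                   [np.eye(m),  np.zeros((m, n))]])
B0_inv = np.block([[np.zeros((m, n)), np.eye(m)],
                   [np.eye(n),        -G]])

print(np.allclose(B0 @ B0_inv, np.eye(m + n)),
      np.allclose(B0_inv @ B0, np.eye(m + n)))   # True True
```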
The LP version of the SSP would be difficult to solve if it were necessary to generate all of the
jient allocations and thus all of the columns in the LP. Fortunately, a column generation procedure
i be employed to solve the LP. In this way, the LP columns are generated only as they are needed.
s procedure is used in conjunction with the Revised Simplex algorithm, which maintains a working
is inverse of order (m+n+l). See Lasdon [7] for a more complete discussion of column generation
iniques. For an example of column generation applied to the cutting-stock problem, see Gilmore
Gomory [5]. Our notion of an efficient allocation is similar to their concept of a cutting activity.
The vector (<t,tt) = (<n, . . . ,cr„,7Ti, . . .tt„,) = C«fi _1 is found in the objective function row of
current Revised Simplex tableau. The vector <j contains the Simplex multipliers corresponding
I), while the vector tt contains the multipliers corresponding to (9). The vector Cb gives the appro-
te costs of the columns in the current basis.
The basis corresponding to any LP tableau is optimal if all nonbasic variables have nonpositive reduced cost. Thus (7)-(9), (11) has been solved if, for all nonbasic columns (g_i^k, e_i)^T, i ∈ I, k ∈ N_i, (σ, π)(g_i^k, e_i)^T − c_i^k ≤ 0. Otherwise bring into the basis the nonbasic column (g_p^k, e_p)^T such that

(σ, π)(g_p^k, e_p)^T − c_p^k = max_{i ∈ I, k ∈ N_i} {(σ, π)(g_i^k, e_i)^T − c_i^k} > 0.
The column (g_p, e_p)^T represents the nonbasic column having the largest positive reduced cost, and is obtained by solving m subproblems (corresponding to the m products). An efficient dynamic programming algorithm for solving these subproblems is given in a following section. In solving the ith subproblem, we either obtain the nonbasic column (corresponding to product i) having the largest positive reduced cost, or else determine that no nonbasic column (corresponding to product i) has a positive reduced cost.
The subproblem for product i, i ∈ I, is to

(14)    maximize θ_i = Σ_{j ∈ J'} f_ij(t_ij) + π_i,

subject to

(15)    Σ_{j ∈ J'} t_ij = a_i,

(16)    0 ≤ t_ij ≤ b_j for all j ∈ J,

(17)    0 ≤ t_i,n+1,

where, for j ∈ J',
A. W. NEEBE AND M. R. RAO
(18)    f_ij(t_ij) = { 0                  if t_ij = 0
                     { σ_j − c_ij t_ij   if t_ij > 0,

and where σ_{n+1} = 0.
If t_i^* = (t_i1^*, t_i2^*, . . ., t_i,n+1^*) is an optimal solution to (14)-(17) with value θ_i^*, determine p such that θ_p^* = max_{i ∈ I} θ_i^*, with ties broken arbitrarily. If θ_p^* ≤ 0, the LP (7)-(9), (11) is optimal. If θ_p^* > 0, the column introduced into the basis is (g_p, e_p)^T where g_p = (g_p1, g_p2, . . ., g_pn) is such that

g_pj = { 1 if t_pj^* > 0
       { 0 otherwise.
One well-known result of mathematical programming is that the maximum of a convex function subject to a set of linear inequalities occurs at an extreme point of the feasible region. (See Hadley [6] for a discussion of this result.) All of the subproblems possess convex objective functions (14) if σ_j ≤ 0 for all j ∈ J. If there exists some σ_p > 0, the pth slack variable corresponding to (8) has a positive reduced cost and may be introduced into the basis. The column introduced would be (e_p, φ)^T. This process can be repeated until the desired result is obtained. The lexicographic ordering rule can be used in the event of degeneracy to avoid cycling, although we have experienced no problem with cycling in the many problems solved. (Refer to Garfinkel and Nemhauser [4] for a discussion of cycling and the lexicographic ordering rule.)
An extreme point of the feasible region (15)-(17) is such that at most one 0 < t_ij < b_j. Conversely, every solution to (15)-(17) such that at most one 0 < t_ij < b_j is an extreme point. Thus every efficient allocation corresponding to product i is an extreme point of (15)-(17). However, (15)-(17) may have extreme points which are nonefficient allocations. But an efficient allocation always maximizes (14) subject to (15)-(17), since each nonefficient allocation has an objective function value no greater than the corresponding efficient allocation. Further, the reduced cost value for the kth efficient allocation is given by (σ, π)(g_i^k, e_i)^T − c_i^k. Thus maximizing θ_i is equivalent to maximizing (σ, π)(g_i^k, e_i)^T − c_i^k.
Observe that in solving (14)-(17) for all i ∈ I, we are solving for an efficient allocation t_i^* having total cost equal to Σ_{j ∈ J'} c_ij t_ij^*. However, the corresponding linear programming column (g_p, e_p)^T has total reduced cost equal to

Σ_{j ∈ J} σ_j g_pj + π_p − Σ_{j ∈ J'} c_pj t_pj^*.

This explains the "fixed charge" shape of the objective function (14).
The column generation algorithm to solve (7)-(9), (11) is given below.
ALGORITHM:
(Step 1) Starting with any feasible solution of the SSP, obtain the initial basis inverse B_0^{-1} and the initial vector of Simplex multipliers (σ, π). Go to step 2.
(Step 2) If σ_j ≤ 0 for all j ∈ J, go to step 3. Otherwise, let p be such that σ_p = max_{j ∈ J} {σ_j}. The column (e_p, φ)^T should be introduced into the basis. Go to step 4.
(Step 3) Examine the subproblems (14)-(17) for all i ∈ I to try to find a column with positive reduced cost. If no column with positive reduced cost exists, stop (the optimal solution to (7)-(9), (11) has been obtained). Otherwise, (g_p, e_p)^T, a generated column having positive reduced cost, should be introduced into the basis. Go to step 4.
(Step 4) Perform the pivot operation, obtaining a new basis inverse and a new vector of Simplex multipliers. Drop the column which leaves the basis, and go to step 2.
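The pricing computation behind steps 2 and 3 can be sketched numerically. The snippet below uses the initial basis of the worked example in the next section (allocations y_1^4, y_2^6, y_3^2 plus the three slack columns); the variable names are ours and this is a sketch, not the authors' FORTRAN implementation:

```python
import numpy as np

# Initial basis B0 of the example: columns (g_i, e_i) for y_1^4, y_2^6,
# y_3^2, then the three slack columns (e_j, 0).
B0 = np.array([[0, 1, 0, 1, 0, 0],
               [0, 1, 0, 0, 1, 0],
               [0, 0, 1, 0, 0, 1],
               [1, 0, 0, 0, 0, 0],
               [0, 1, 0, 0, 0, 0],
               [0, 0, 1, 0, 0, 0]], dtype=float)
C_B = np.array([24, 106, 126, 0, 0, 0], dtype=float)

sigma_pi = C_B @ np.linalg.inv(B0)      # (sigma_1..sigma_3, pi_1..pi_3)

# Candidate LP column for allocation y_3^4 = (0, 3, 4, 0): indicator
# g = (0, 1, 1), unit vector e_3, and cost 114.
a = np.array([0, 1, 1, 0, 0, 1], dtype=float)
reduced_cost = sigma_pi @ a - 114.0     # positive, so the column enters
```

Here `reduced_cost` evaluates to 12, the value obtained in the example below; a positive value triggers the pivot of step 4.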
4. EXAMPLE
Consider an example problem similar to that given by White and Francis [11] and also by Dannenbring and Khumawala [1].
                    compartment j              demand
   product i       1     2     3     4           a_i
       1          20    14    19    24            1
       2          15    13    20    22            8
       3          18    18    15    22            7
   capacity b_j    3     7     4    16
The only difference from their example is that we have expressed all demands a_i and capacities b_j in terms of multiples of 10 units. The fourth compartment represents external storage. The efficient allocations for product i, i ∈ I, were generated by considering all possible subsets of J' having total capacity greater than, or equal to, the demand a_i. For each of these subsets, an efficient allocation is obtained by utilizing only the least costly of the selected compartments. This is done only for illustrative purposes, since the proposed algorithm generates columns only as they are needed. The 18 different efficient allocations, along with their respective costs, are enumerated columnwise in Table 1.
Table 1. (y_i^k)^T

          Product i = 1          Product i = 2                         Product i = 3
  j \ k   1   2   3   4      1    2    3    4    5    6    7       1    2    3    4    5    6    7
  1       1   0   0   0      0    0    0    0    3    1    3       0    0    0    0    3    3    3
  2       0   1   0   0      0    0    7    7    0    7    0       0    0    7    3    0    0    4
  3       0   0   1   0      0    4    0    1    0    0    4       0    4    0    4    0    4    0
  4       0   0   0   1      8    4    1    0    5    0    1       7    3    0    0    4    0    0
  c_i^k  20  14  19  24    176  168  113  111  155  106  147     154  126  126  114  142  114  126
Assume that a solution to the SSP is obtained using some heuristic. In this solution the 1 unit of product 1 is allocated to external storage, while 1 and 7 units of product 2 are allocated to compartments 1 and 2, respectively. Finally, 4 units of product 3 are allocated to compartment 3, and the remaining 3 units are allocated to external storage. This solution corresponds to efficient allocations y_1^4, y_2^6, and y_3^2, and has a total cost of 256. The initial basis corresponding to (12) is
B_0 = [ 0  1  0  1  0  0 ]
      [ 0  1  0  0  1  0 ]
      [ 0  0  1  0  0  1 ]
      [ 1  0  0  0  0  0 ]
      [ 0  1  0  0  0  0 ]
      [ 0  0  1  0  0  0 ].
Three subproblems of the form (14)-(17) must now be solved to determine that column, if any, having the largest positive reduced cost. For subproblem 1 (product 1), t_1^* = y_1^2 solves (14)-(17), and has reduced cost equal to 10. For subproblem 2, t_2^* = 0, as no column has positive reduced cost. For subproblem 3, t_3^* = y_3^4 solves (14)-(17), and has reduced cost equal to 12. Hence the column introduced into the basis B_0 is (g_3^*, e_3)^T where g_3^* corresponds to t_3^*.
After four additional columns are generated, and hence after four additional pivot operations, the optimal linear programming solution is obtained: z_1^4 = z_2^3 = z_3^6 = 1, having an objective function value of 251. Since the solution is all-integer, this indicates that the optimal procedure is to utilize efficient allocations y_1^4, y_2^3, and y_3^6.
5. SOLUTION OF THE SUBPROBLEMS
As stated, the subproblems (14)-(17) can be solved using dynamic programming. However, it is possible to devise a particularly efficient dynamic programming recursion by taking advantage of the fact that some efficient allocation solves (14)-(17). Specifically, we take advantage of the fact that, for any efficient allocation y_i^k, if 0 < y_ij*^k < b_j*, then y_ij^k = 0 for all j such that c_ij > c_ij*.
Consider the following recursion for any subproblem (product) i. Let the costs c_ij, j ∈ J', be ranked in nondecreasing order c_i,(1) ≤ c_i,(2) ≤ . . . ≤ c_i,(n+1). In general, F_j(k) is the maximum (partial) reduced cost of being in state k with stages j+1, j+2, . . ., n, n+1 yet to be considered. The state variable k identifies the number of units of product i that have been allocated in stages j, j−1, . . ., 1. The stages, of course, correspond to the compartments. The calculations start at stage 1 and proceed to stage n+1. The maximum reduced cost of all efficient allocations is given by F_{n+1}(a_i).
Calculations for stage j = 1:

F_1(0) = π_i,  F_1(k) = −∞ for k = 1, 2, . . ., a_i.
If b_(1) < a_i, let F_1(b_(1)) = σ_(1) − c_i,(1)·b_(1) + π_i.
Otherwise, let F_1(a_i) = σ_(1) − c_i,(1)·a_i + π_i.

Calculations for stage j = 2, 3, . . ., n + 1:

F_j(k) = F_{j−1}(k) for k = 0, 1, . . ., a_i.
If b_(j) ≥ a_i, let l = 0 and go to step 2. Otherwise go to step 1.

(Step 1)
F_j(k + b_(j)) = max [F_j(k + b_(j)); F_{j−1}(k) + σ_(j) − c_i,(j)·b_(j)]
for k = 0, 1, 2, . . ., a_i − b_(j) − 1.
Let l = a_i − b_(j) and go to step 2.

(Step 2)
F_j(a_i) = max [F_j(a_i); max_{l ≤ k ≤ a_i − 1} (F_{j−1}(k) + σ_(j) − c_i,(j)·(a_i − k))].

(End of calculations for stage j.)
A brief explanation of the recursion is helpful. If at stage j we are in state k < a_i, then t_i,(j) can equal 0 or b_(j). These calculations are carried out in step 1. On the other hand, if at stage j we are in state k = a_i, then t_i,(j) can assume values between 0 and b_(j) inclusive. These calculations are carried out in step 2. If F_{n+1}(a_i) > 0, an efficient allocation with positive reduced cost has been found.
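The stage recursion can be sketched as follows (the function name and argument layout are ours, not the authors' code; `ranked` lists the compartments of J' in nondecreasing cost order as (σ, c, b) triples, with external storage last and σ_{n+1} = 0):

```python
import math

def max_reduced_cost(a, pi, ranked):
    F = [-math.inf] * (a + 1)              # stage 1
    F[0] = pi
    s, c, b = ranked[0]
    if b < a:
        F[b] = s - c * b + pi
    else:
        F[a] = s - c * a + pi
    for s, c, b in ranked[1:]:             # stages 2, ..., n+1
        Fp = F[:]                          # F_{j-1}
        low = 0
        if b < a:
            for k in range(a - b):         # step 1: use the full b_(j)
                F[k + b] = max(F[k + b], Fp[k] + s - c * b)
            low = a - b
        # step 2: reach state a_i with a partial (or full) allocation
        F[a] = max([F[a]] + [Fp[k] + s - c * (a - k) for k in range(low, a)])
    return F

# Product 3 of the example in section 6 (a_3 = 7, pi_3 = 126):
F = max_reduced_cost(7, 126, [(0, 15, 4), (0, 18, 3), (-12, 18, 7), (0, 22, 16)])
print(F[7])                                # 12
```

The final value F_{n+1}(a_i) = 12 agrees with Table 2 of the continued example in section 6.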
Instead of solving (14)-(17) exactly, a good heuristic solution can be easily obtained in the following manner. For any product i, rank the compartments j in nonincreasing order [σ_(1) − c_i,(1) b_(1)]/b_(1) ≥ [σ_(2) − c_i,(2) b_(2)]/b_(2) ≥ . . .. Ties may be broken arbitrarily. Then allocate b_(1) units to compartment (1), b_(2) units to compartment (2), and so on until all demand a_i is satisfied. The last compartment to be assigned, say compartment (l), will be allocated an amount less than, or equal to, b_(l) units. Observe that if compartment (l) is allocated exactly b_(l) units, then the heuristic optimally solves (14)-(17).
This is easily seen. Suppose the objective function (14) is replaced by

(14a)    maximize Σ_{j ∈ J'} d_ij t_ij + π_i,

where d_ij = (σ_j − c_ij b_j)/b_j. Maximizing (14a) subject to (15)-(17) gives an upper bound for max θ_i. Clearly the heuristic procedure solves (14a), (15)-(17).
Thus π_i + Σ_{j=1}^{l} d_i,(j) t_i,(j), where t_i,(j) is the amount the heuristic allocates to compartment (j), provides an upper bound for max θ_i. If compartment (l) is allocated exactly b_(l) units, the heuristic obtains a feasible solution with an objective value equal to the bound.
Our column generation heuristic is an example of a "greedy" algorithm. See Magazine et al. [8] for an example of another greedy algorithm.
From a computational point of view, it may be better to use the heuristic to generate columns, reserving the dynamic programming algorithm for cases in which no column with positive reduced cost can be found. The greater number of pivot operations that in general will be required may be offset by the faster column generating procedure.
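The greedy ranking-and-filling procedure above can be sketched as follows (a hypothetical helper of ours; `comps` lists the compartments of J' as (name, σ, c, b) tuples):

```python
def greedy_allocation(a, comps):
    # Rank by the ratio (sigma_j - c_ij * b_j) / b_j, nonincreasing.
    ranked = sorted(comps, key=lambda t: (t[1] - t[2] * t[3]) / t[3],
                    reverse=True)
    alloc, remaining = {}, a
    for name, sigma, c, b in ranked:
        if remaining == 0:
            break
        take = min(b, remaining)           # fill each compartment in turn
        alloc[name] = take
        remaining -= take
    return alloc

# Product 3 of the example in section 6: sigma = (0, -12, 0, 0), a_3 = 7.
print(greedy_allocation(7, [(1, 0, 18, 3), (2, -12, 18, 7),
                            (3, 0, 15, 4), (4, 0, 22, 16)]))
# {3: 4, 1: 3}
```

On the section 6 data this returns 4 units to compartment 3 and the remaining 3 to compartment 1, the same efficient allocation the dynamic program finds there.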
6. EXAMPLE (Continued)
Consider the SSP of the previous example. After the first pivot operation, the basis inverse is

B^{-1} = [ 0   0  0  1   0   0 ]
         [ 0   0  0  0   1   0 ]
         [ 0  -1  0  0   1   1 ]
         [ 1   0  0  0  -1   0 ]
         [ 0   1  0  0  -1   0 ]
         [ 0   0  1  0   0  -1 ].
The basic variables correspond to y_1^4, y_2^6, y_3^2, s_1, y_3^4 and s_3 (in that order), where s_1 and s_3 are slack variables. Thus (σ, π) = C_B B^{-1} = (24, 106, 126, 0, 114, 0)B^{-1} = (0, −12, 0, 24, 118, 126). The dynamic programming recursion corresponding to product i = 3 is as follows. First, the costs c_3j, j = 1, 2, 3, 4 are ranked in nondecreasing order so that c_3,(1) = c_3,3 = 15, c_3,(2) = c_3,1 = 18, c_3,(3) = c_3,2 = 18 and c_3,(4) = c_3,4 = 22.
The final values of F_j(k) for all states k = 0, . . ., 7 and for all stages j = 1, . . ., 4 are given in Table 2.

Table 2

  j \ k    0     1     2     3     4     5     6     7
   1     126    −∞    −∞    −∞    66    −∞    −∞    −∞
   2     126    −∞    −∞    72    66    −∞    −∞    12
   3     126    −∞    −∞    72    66    −∞    −∞    12
   4     126    −∞    −∞    72    66    −∞    −∞    12
Since F_4(7) = 12 > 0, we have found a column with a positive reduced cost. Working backwards through the recursion, the corresponding optimal solution is t_34^* = 0, t_32^* = 0, t_31^* = 3, and t_33^* = 4, which corresponds to efficient allocation y_3^6.
Now using the heuristic technique,

[σ_1 − c_31 b_1]/b_1 = [0 − 18·3]/3 = −18
[σ_2 − c_32 b_2]/b_2 = [−12 − 18·7]/7 = −19.7
[σ_3 − c_33 b_3]/b_3 = [0 − 15·4]/4 = −15
[σ_4 − c_34 b_4]/b_4 = [0 − 22·16]/16 = −22.

Four units of product 3 are assigned to compartment 3 and the remaining 3 units to compartment 1. The total reduced cost of this efficient allocation is −60 − 54 + π_3 = −114 + 126 = 12. Observe that this is the same efficient allocation generated using the dynamic programming algorithm. Of course, this is not always the case.
7. RESOLVING FRACTIONAL TERMINATION
If the LP version of the SSP terminates fractional, then at least one compartment q ∈ J is assigned to more than one product. Let p ∈ I be any of these products. One possible way of obtaining an integer solution is to utilize a branch and bound scheme starting with x_pq. Two LP subproblems of the form (7)-(9), (11) are created. In the first subproblem, compartment q is prohibited from being assigned to product p. This may be accomplished by setting c_pq = R, where R is a sufficiently large positive cost. In effect, this eliminates all efficient allocations y_p^k, k ∈ N_p, having y_pq^k > 0. In the second subproblem, compartment q is not assigned to any product except possibly p. This may be accomplished by setting c_iq = R for all i ∈ I except i = p. Observe that this is not a strict partition of the feasible region, as it is possible for compartment q not to be assigned to product p in both subproblems. However, a strict partition may be obtained if in the second subproblem we eliminate the slack variable associated with compartment q.
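The two cost modifications above can be sketched as a small helper (the function name, data layout, and the choice of R are ours, for illustration only):

```python
import copy

def branch_on(c, p, q, R=10**6):
    """c[i][q]: storage cost of product i in compartment q; R is big."""
    c1 = copy.deepcopy(c)
    c1[p][q] = R                   # subproblem 1: q never assigned to p
    c2 = copy.deepcopy(c)
    for i in c2:
        if i != p:
            c2[i][q] = R           # subproblem 2: q reserved for p, if used
    return c1, c2
```

Each modified cost matrix defines one LP subproblem of the form (7)-(9), (11), to be solved by the same column generation algorithm.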
The two LP subproblems can be solved by the column generation algorithm outlined in the previous sections. Hopefully both LP subproblems will terminate in all-integer solutions. However, if either terminates in a fractional solution that has an objective value less than the objective value of the best currently available integer solution, then additional LP subproblems must be created and solved. The branch and bound procedure continues until a minimum cost, all-integer solution is obtained.
Before solving any LP subproblems, it makes sense to check whether the fractional solution can be used to easily obtain an optimal integer solution. The following procedure, which takes advantage of the fact that there usually exists at least one alternate optimal solution which is all-integer, was found to be very effective. Let {x_i^k} be any (fractional) LP solution with objective value ẑ, and let {y_i^k} be the corresponding set of efficient allocations. Consider in turn all x_i^k which are equal to 1. (Every LP fractional solution we have encountered so far has contained a number of values equal to 1.) Each such
x_i^k identifies a set of compartments S(y_i^k) which, except possibly for the external storage compartment, are assigned only to product i. Remembering that efficient allocation y_i^k is to be used in storing product i, eliminate product i and all internal compartments in S(y_i^k) from the problem. Proceeding in this fashion, an SSP of reduced size is obtained. The same heuristic used to obtain the initial feasible solution to the SSP may be applied to this reduced problem. An optimal solution to the original SSP has been obtained if ẑ equals the heuristic cost of the reduced problem plus the cost of the previously established efficient allocations.
8. COMPUTATIONAL RESULTS
The proposed SSP algorithm was written in double precision FORTRAN IV, compiled under level H, and run on an IBM 370/165. The "demand allocation heuristic" suggested by Dannenbring and Khumawala [1] was used to obtain an initial basis inverse. Throughout the calculations the inverse was stored as an adjoint matrix and a determinant, thus eliminating roundoff errors in the calculations. At every LP iteration the column generation heuristic was employed to attempt to find a column with a positive reduced cost. Only when it failed was the dynamic programming algorithm used.
If the LP version of the SSP terminated fractional, the procedure outlined in the previous section was used to try to easily obtain an alternate optimal integer solution. In this case, the demand allocation heuristic was the solution heuristic applied to the reduced SSP. If the procedure failed to obtain an optimal integer solution, the appropriate modified-cost data was generated, and the necessary LP branch and bound subproblems were solved from scratch using our SSP algorithm. Because of the relatively small number of LP subproblems that were required in the computations, this technique is probably more efficient than the programming of a more elaborate branch and bound procedure.
A total of 38 data sets were randomly generated, each data set being composed of a number of problems of identical size. The first 9 data sets are based on the data sets used by Dannenbring and Khumawala [1]. The only difference from their data is that we have expressed all demands a_i and capacities b_j in terms of multiples of 10. Since their values were in whole multiples of 10, no essential changes have been made. However, the effectiveness of our dynamic programming column generation procedure can be increased by using smaller values of a_i and b_j. The next 9 data sets were generated in a similar fashion using the same data generator. The parameters of the uniform distributions used in generating the data are as follows. Let U[p, q] denote the discrete uniform distribution between p and q, inclusive. Then, for the first 18 data sets (called Type 1 data sets), the internal storage costs c_ij were drawn from U[10, 19], the external storage costs c_i,n+1 were drawn from U[20, 24], and the demands a_i and capacities b_j were all drawn from U[1, 9].
The remaining 20 data sets were constructed to test the effectiveness of our SSP algorithm in solving problems in which the data had little variation. For all of these problems, the internal storage costs were drawn from U[13, 15], while the external storage costs were all set equal to 20. For Type 2 data sets, the capacities were drawn from U[1, 9], and the demands from U[5, 6]. For Type 3 data sets, the capacities were drawn from U[5, 6], and the demands from U[1, 9].
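The Type 1 generation scheme can be sketched as follows (the function name and seeding are ours; the bounds are those stated above, with `random.randint` inclusive of both endpoints like U[p, q]):

```python
import random

def type1_data(m, n, seed=0):
    rng = random.Random(seed)
    c_internal = [[rng.randint(10, 19) for _ in range(n)] for _ in range(m)]
    c_external = [rng.randint(20, 24) for _ in range(m)]   # costs c_{i,n+1}
    demands = [rng.randint(1, 9) for _ in range(m)]        # a_i
    capacities = [rng.randint(1, 9) for _ in range(n)]     # b_j
    return c_internal, c_external, demands, capacities
```

Type 2 and Type 3 generators would differ only in the ranges used for costs, demands, and capacities.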
Computational results are given in Tables 3 to 5. For all data sets we indicate the problem size, the number of problems, and the number for which an optimal integer solution was readily available. This includes problems which terminated all-integer, and problems which terminated fractional but from which an optimal integer solution was easily obtained. We indicate the average and maximum
number of pivot operations, and the average and maximum c.p.u. time in seconds required to solve the primary LP. These figures are exclusive of input/output, and exclusive of time needed for solving the LP subproblems. The next statistic gives the total number of LP subproblems which had to be solved to determine optimal integer solutions for all problems in the data set. The final statistic, when available, is the average solution time (also in c.p.u. seconds on the IBM 370/165) required by the branch and bound algorithm of Dannenbring and Khumawala [1].
In summary, all of the 251 problems attempted were solved. Type 2 and Type 3 problems were generally easier to solve than Type 1 problems, both in terms of the effort required to solve the primary LP and in the number of LP subproblems that had to be solved. Actually, Type 2 and Type 3 problems had a greater tendency to terminate fractional. However, in almost all cases an optimal integer solution could be derived easily from the fractional solution.
Table 3. Type 1 Data Sets

Data   m   n+1  Total     Total    Average  Maximum  Average  Maximum  Total LP      D & K
set             problems  integer  pivots   pivots   time     time     subproblems   average time
 1    10   10     10       All       26       35      0.17     0.13       —
 2    15    5     10       All       14       25      0.08     0.12       —            0.10
 3    20    5     10       All       13       17      0.09     0.13       —            0.20
 4    25    5     10       All       14       19      0.12     0.16       —            0.20
 5     5   15     10       All        1        5      0.02     0.05       —            0.60
 6     5   20     10       All        2        5      0.03     0.05       —            0.90
 7     5   25     10       All        1        5      0.03     0.06       —            0.70
 8    12   12     10        8        53      105      0.39     0.72       6            6.80*
 9    15   15     10        9        77      117      0.82     1.24       6           15.20†
10    20   20     10       All      130      238      2.37     3.86       —
11    25   25     10       All      224      344      5.96     9.50       —
12    20   40     10        7        27       91      1.04     3.32       4
13    40   20     10       All      164      231      5.43     6.87       —
14    30   30     10        7       334      510     12.23    18.26       4
15    40   40      5        4       633      867     39.97    54.99       1
16    30   50      5        4        72      107      3.57     5.62       1
17    50   30      5       All      371       46     20.63    24.62       —
18    50   50      2                992     1124     94.98   105.95       6

*Average time for 8 of 10 problems solved.
†Average time for 3 of 10 problems solved.
Table 4. Type 2 Data Sets

Data   m   n+1  Total     Total    Average  Maximum  Average  Maximum  Total LP
set             problems  integer  pivots   pivots   time     time     subproblems
19    15   15      5        4        68       73      0.89     1.06       1
20    20   20      5        4        85       98      1.76     1.89       1
21    25   25      5       All      130      167      3.94     4.80       —
22    20   40      5       All                        0.20     0.21       —
23    40   20      5       All       88      111      3.17     4.29       —
24    30   30      5       All      185      211      7.77     9.33       —
25    40   40      5       All      353      453     27.02    35.08       —
26    30   50      5       All                        0.41     0.42       —
27    50   30      5       All      180      205     11.38    13.23       —
28    50   50      2       All      472      576     51.42    60.79       —
Table 5. Type 3 Data Sets

Data   m   n+1  Total     Total    Average  Maximum  Average  Maximum  Total LP
set             problems  integer  pivots   pivots   time     time     subproblems
29    15   15      5       All       39       59      0.44     0.72       —
30    20   20      5       All       76      110      1.46     2.40       —
31    25   25      5       All       79      101      2.13     2.78       —
32    20   40      5       All                        0.21     0.22       —
33    40   20      5       All      120      187      3.86     6.29       —
34    30   30      5        4       135      221      4.95     8.69       1
35    40   40      5        4       145      223      8.71    13.57       1
36    30   50      5       All        8       38      0.73     2.11       —
37    50   30      5       All      152      179      8.05    10.06       —
38    50   50      2                302      388     27.29    34.78       2
9. ACKNOWLEDGEMENT
We wish to thank D. S. Rubin for suggesting the column generation heuristic and for suggesting the procedure for storing the basis inverse, D. G. Dannenbring and B. M. Khumawala for the use of their data and computational results, and an anonymous referee for proposing a number of valuable revisions.
REFERENCES
[1] Dannenbring, D. G. and B. M. Khumawala, "An Investigation of Branch and Bound Methods for Solving the Segregated Storage Problem," AIIE Transactions 5, 265-274 (1973).
[2] Eilon, S. and R. V. Mallya, "A Note on the Optimal Division of a Container into Two Compartments," International J. Production Res. 5, 1963-1969 (1966).
[3] Evans, J. R., "An Implicit Enumeration Algorithm for the Segregated Storage Problem," Working Paper, School of Industrial and Systems Engineering, Georgia Institute of Technology (1974).
[4] Garfinkel, R. S. and G. L. Nemhauser, Integer Programming (John Wiley, 1972).
[5] Gilmore, P. C. and R. E. Gomory, "A Linear Programming Approach to the Cutting-Stock Problem," Operations Research 9, 849-859 (1961).
[6] Hadley, G. F., Nonlinear and Dynamic Programming (Addison-Wesley, 1964).
[7] Lasdon, L. S., Optimization Theory for Large Systems (Macmillan, 1970).
[8] Magazine, M. J., G. L. Nemhauser, and L. E. Trotter, Jr., "When the Greedy Solution Solves a Class of Knapsack Problems," Operations Research 23, 207-217 (1975).
[9] Shlifer, E. and P. Naor, "Elementary Theory of the Optimal Silo Storage Problem," Operations Research Quarterly 12, 54 (1961).
[10] White, J. A., "Normative Models for Some Segregated Storage and Warehouse Sizing Problems," unpublished Ph. D. Dissertation, Ohio State University (1970).
[11] White, J. A. and R. L. Francis, "Solving a Segregated Storage Problem Using Branch-and-Bound and Extreme Point Ranking Methods," AIIE Transactions 3, 37-44 (1971).
OPTIMAL FACILITY LOCATION UNDER
RANDOM DEMAND WITH GENERAL COST STRUCTURE
V. Balachandran
and
Suresh Jain
Graduate School of Management
Northwestern University
Evanston, Illinois
ABSTRACT
This paper investigates the problem of determining the optimal location of plants, and
their respective production and distribution levels, in order to meet demand at a finite number
of centers. The possible locations of plants are restricted to a finite set of sites, and the de-
mands are allowed to be random. The cost structure of operating a plant is dependent on
its location and is assumed to be a piecewise linear function of the production level, though
not necessarily concave or convex. The paper is organized in three parts. In the first part,
a branch and bound procedure for the general piecewise linear cost problem is presented,
assuming that the demand is known. In the second part, a solution procedure is presented
for the case when the demand is random, assuming a linear cost of production. Finally, in
the third part, a solution procedure is presented for the general problem utilizing the results
of the earlier parts. Certain extensions, such as capacity expansion or reduction at existing
plants, and geopolitical configuration constraints can be easily incorporated within this
framework.
INTRODUCTION
The facility location problem and related problems are of relevance in the long range planning of a firm's operations. These problems involve the determination of the location of the facilities, their associated capacity, and the distribution of the product from these facilities to the different demand centers. Different aspects of this problem have been investigated by a number of researchers under varied assumptions [2, 10, 14, 15, 17]. In our paper we consider a generalized version of the above problem. Specifically, we consider the case where the location of facilities and their sizes are to be decided upon in order to satisfy the demand from different centers. The demand at these different centers is assumed to be random, and the cost associated with the production at any facility is assumed to be a piecewise linear function though not necessarily convex or concave. In the next section we consider the problem formulation and its motivation.
MODEL FORMULATION
A firm manufactures a product which is required at n different demand centers. The demand b_j (j = 1, 2, . . ., n), at each center, is assumed to be a random variable whose marginal density f(b_j) is assumed to be known. The firm has the option of setting up facilities at different sites i (i = 1, 2, . . ., m). The possible capacities of the facility at site i can be any nonnegative integer a_i from the ordered set A_i, where A_i = {a_ir | r = 1, 2, . . ., m_i, and a_ir < a_i,r+1 for r = 1, 2, . . ., m_i − 1}. The first element
V. BALACHANDRAN AND S. JAIN
of each of the sets A_i (i = 1, 2, . . ., m) is assumed to be 0 and corresponds to the decision of not locating a facility at site i, and the last element, a_i,m_i, corresponds to the maximum possible production at site i. The cost of producing y_i units at site i is f_i(y_i), where f_i(·) is a piecewise linear function defined on the set of nonnegative integers. Specifically,

(1)    f_i(y_i) = { 0                 if y_i = a_i1 = 0
                  { K_ir + v_ir y_i   if a_ir < y_i ≤ a_i,r+1 for r = 1, 2, . . ., m_i − 1
                  { ∞                 if y_i > a_i,m_i.

K_ir may be considered as the fixed component of the cost associated with setting up a plant of maximum capacity a_i,r+1, and v_ir is the per unit variable cost. The cost of distributing x_ij units from a facility at site i to demand center j is t_ij x_ij, where t_ij is a constant. These costs may be considered as the discounted costs if a multiperiod planning horizon is considered. With the above notation the problem can be formulated:
(2)    Minimize Z = E[ Σ_{i=1}^{m} f_i(y_i) + Σ_{i=1}^{m} Σ_{j=1}^{n} t_ij x_ij ]

subject to

(3)    Σ_{i=1}^{m} x_ij = b_j* for j ∈ J,

(4)    Σ_{j=1}^{n} x_ij = y_i ≤ a_i,m_i for i ∈ I,

(5)    x_ij ≥ 0, y_i ≥ 0,

where J = {1, 2, . . ., n}, I = {1, 2, . . ., m} and b_j* represents the realization of the random variable b_j.
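The piecewise linear cost (1) can be sketched as follows (the breakpoints a_ir, fixed charges K_ir, and slopes v_ir used in the example call are illustrative numbers of ours, not data from the paper):

```python
import math

def production_cost(y, a, K, v):
    """a = [a_i1, ..., a_im_i] with a[0] = 0; K[r], v[r] for segment r."""
    if y == 0:
        return 0.0
    for r in range(len(a) - 1):
        if a[r] < y <= a[r + 1]:
            return K[r] + v[r] * y
    return math.inf                  # y exceeds the maximum capacity a_im_i

# Two segments: capacity 10 plant (K=100, v=5), capacity 20 plant (K=150, v=3).
cost = production_cost(15, [0, 10, 20], [100, 150], [5, 3])   # 150 + 3*15 = 195
```

Note that, as in (1), the function need not be concave or convex: the fixed charge K_ir and slope v_ir can each rise or fall from one segment to the next.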
The above formulation incorporates the problems posed by various authors. For instance, the capacitated plant location problem [10], the location-allocation problem [14], and the warehouse location problem [2, 17] are all special cases of the problem posed above. Discussions of the recent research work done in this area have been presented by White and Francis [15], Soland [14] and recently by Elshafei and Haley [6]. Our formulation is similar in some respects to that of Soland [14]. Soland considers the case where the demand is deterministic and the cost functions are concave. Since the demand is generally not known with certainty, as discussed in [7, 16], the consideration of this problem with probabilistic demand appears to be more realistic. White and Francis [15] consider probabilistic demand for a different problem, viz., that of determining optimum warehouse sizes only. The problem that they consider is different from ours, since they are not concerned with either the location or the distribution aspects. Essentially their problem has no constraints of the type represented by Equations (3) and (4). The consideration of random demands increases the complexity of the problem. To our knowledge, no computationally satisfactory solution of this entire problem exists in the published literature.
FACILITY LOCATION UNDER RANDOM DEMAND
Further, the cost structure that is generally considered is either a fixed cost plus a variable cost such that the total cost of production is concave, or a general concave cost as in Soland [14]. In the real world, because of indivisibilities and economies/diseconomies of scale, the cost structure is often different. Hadley and Whitin [9, Chapter 2, p. 62] discuss quantity discounts where the cost function, though piecewise linear, is neither concave nor convex. Rech and Barton [11] also consider nonconvex piecewise linear cost functions for solving the transportation and related network problems. Their algorithm, for deterministic problems, can be applied to the problem we treat in section 3 and is similar to the algorithm we develop there. Their algorithm is different, however, in that its underestimating functions are different, and it solves a sequence of network flow problems. By considering a sequence of transportation problems, we can use the operator theoretic approach to find the solutions without resolving. This use of transportation problems becomes even more important when the probabilistic problem is considered later.
For ease of exposition we present the solution procedure of this problem in three phases. Initially we consider the case where the demand is deterministic. We develop an algorithm treating the b_j*'s in (3) as known constants. This algorithm essentially solves a deterministic facility location, capacity, and distribution problem with a general piecewise linear cost structure. A branch and bound procedure wherein subproblems are solved using operator theory [12, 13] (an extension of parametric programming where simultaneous changes in the parameters of a transportation problem are investigated) is presented in section 3 for the solution of this deterministic problem. In section 4 we develop an algorithm to solve the probabilistic case when the cost is assumed to be a linear function. The algorithm for this probabilistic demand problem utilizes the operator theoretic approach and the Kuhn-Tucker conditions. Finally, in section 5, we integrate the algorithms in sections 3 and 4 to develop a solution procedure for the entire problem. We show that the branch and bound procedure of section 3 for the entire problem results in subproblems of the type considered in section 4. This three-phase approach provides flexibility for a user to solve either the general problem or the special cases considered in sections 3 and 4.
THE DETERMINISTIC DEMAND MODEL
In this section, we assume that the b_j*'s given in Equation (3) are known constants, and in order to ensure a feasible solution to this deterministic problem we assume that Σ_{i=1}^{m} a_i,m_i ≥ Σ_{j=1}^{n} b_j*. Some of the different cost structures which arise in reality, and are permissible in our formulation, are sketched in Figures 1a-1c.
FIGURE 1a. Economies of scale both in fixed and variable cost.
FIGURE 1b. Piecewise linear concave costs.
424
V. BALACHANDRAN AND S. JAIN
FIGURE 1c. Diseconomies of scale in variable cost.
To solve this problem we apply the branch and bound procedure. We first approximate all the cost functions f_i(y_i) by their best linear underestimates.
DEFINITION 1: A linear underestimate of the function f_i(y_i) over the interval R_i is a linear function l_0 + l_1 y_i such that

l_0 + l_1 y_i ≤ f_i(y_i) for all y_i ∈ R_i,

and

l_0 + l_1 y_i0 = f_i(y_i0), where y_i0 ∈ R_i and y_i0 ≤ y_i for all y_i ∈ R_i.

DEFINITION 2: The best linear underestimate c_0^i + c_1^i y_i of the function f_i(y_i) over the interval R_i is a linear underestimate such that if l_0 + l_1 y_i is any linear underestimate of the function f_i(y_i) over R_i then c_1^i ≥ l_1.
As illustrated in Figure 2, OE is the best linear underestimate of the cost function sketched in the interval [0, a_3]. The best linear underestimates of the cost function in the intervals [0, a_2] and [a_2, a_3] are OB and CD respectively.

FIGURE 2. Illustration of the best linear underestimates. (The cost function is OABCD.)
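Definition 2 can be sketched for a cost given by finitely many breakpoints (x_k, f(x_k)) with the function linear between them (the helper name is ours): the best underestimate is exact at the left endpoint and takes the largest slope that stays at or below f at every breakpoint, which for a piecewise linear f suffices.

```python
def best_linear_underestimate(points):
    # points: [(x_0, f(x_0)), ..., (x_N, f(x_N))], sorted by x.
    (x0, f0), rest = points[0], points[1:]
    slope = min((fk - f0) / (xk - x0) for xk, fk in rest)
    intercept = f0 - slope * x0
    return intercept, slope          # the line l_0 + l_1 * y

# A concave illustrative cost: the chord from the left endpoint to the far
# breakpoint is the best underestimate, as with OE in Figure 2.
print(best_linear_underestimate([(0, 0), (1, 5), (3, 9), (6, 12)]))  # (0.0, 2.0)
```

A cost with jumps, such as OABCD in Figure 2, would need the discontinuity points supplied as additional breakpoints just past each jump; this sketch assumes the sampled breakpoints capture those values.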
FACILITY LOCATION UNDER RANDOM DEMAND
425
Substituting the best initial linear underestimates c_0^i + c_1^i y_i for each of the functions f_i(y_i) over the intervals [a_i1, a_i,m_i], we obtain the following transportation problem after removing the constant term, Σ_i c_0^i, from the objective function (corresponding to node 1 in Figure 3). Note that the constant term is zero at node 1.
[Figure 3 shows a branching tree with the annotations: current LB = L = Z_1(X^1); current UB = U = Z(X^1); find L and U and branch if necessary on the node k such that Z_1(X^k) = L; at the stage shown, nodes 1 and 2 are closed and nodes 3, 4, and 5 are open.]

FIGURE 3. Branch and bound procedure.
(6) Minimize Z_1 = \sum_{i=1}^{m} \sum_{j=1}^{n} (c_i^1 + t_{ij}) x_{ij}

subject to

(7) \sum_{i=1}^{m} x_{ij} = b_j* for j \in J,

(8) \sum_{j=1}^{n+1} x_{ij} = a_{i m_i} for i \in I,

(9) x_{ij} \geq 0 for i \in I and j \in J \cup \{n+1\},
where x_{i,n+1} (i = 1, 2, . . ., m) are the slack variables. Let us denote by J' the set J \cup \{n+1\}. Let the optimal solution to this approximate problem be X^1 = \{x_{ij}^1\} and its optimal cost be Z_1(X^1). (In general at node s we let X^s and y_i^s denote the optimal solution of the approximate transportation problem and the resultant production at plant i, respectively. Note that y_i^s = \sum_{j \in J} x_{ij}^s.) If Z(X^1) is the value of the original objective function (2) for the solution vector X^1, then we can easily prove the following result:
If X* is the optimal solution vector to the original problem represented by Equations (2)-(5), then the optimal value Z(X*) is bounded above by Z(X^1) and bounded below by Z_1(X^1).
If L and U denote the current lower and upper bounds, then after solving the first approximate problem corresponding to node 1 (Figure 3), we have

L = Z_1(X^1),

and

U = Z(X^1).
After solving the first approximate problem, we partition the domain of definition of one of the functions f_i(y_i) so as to obtain better linear underestimates in each of the different segments of the partition. A number of different rules could be used to determine the index i (i \in I) on which to partition. Further, there is also the option of determining the number of segments in the partition, which would equal the number of branches. In this paper we provide one rule for obtaining two branches. This rule is similar to the one proposed by Rech and Barton [11]. We determine the first index k, k \in I, such that

(10) f_k(y_k^1) - c_k^1 y_k^1 - c_k^0 \geq f_i(y_i^1) - c_i^1 y_i^1 - c_i^0 for i \in I,

and

(11) f_k(y_k^1) - c_k^1 y_k^1 - c_k^0 > 0.
The two branches that we obtain correspond to

(13) y_k \leq a_{k,t},

and

(14) y_k > a_{k,t},

where a_{k,t} < y_k^1 \leq a_{k,t+1}.
Thus at each stage of the branching process we generate two additional subproblems, which can be represented by nodes in the branching tree. These nodes are numbered sequentially as the subproblems are generated. These subproblems partition the domain of definition of f_k(y_k). We substitute the best linear approximation of f_k(y_k) in each of these partitions and add the relevant constraint from the set (13)-(14) to each of the subproblems. The resultant problems are capacitated transportation problems. In general, at a specific node s, which is obtained as a branch from node p by partitioning on an index k (k \in I) satisfying inequalities similar to (10)-(11), the following problem results:
Minimize Z_s = \sum_{i \in I} \sum_{j \in J} (c_i^s + t_{ij}) x_{ij} + K^s

subject to

\sum_{i \in I} x_{ij} = b_j* for j \in J,

\sum_{j \in J'} x_{ij} = U_i^s for i \in I,

x_{i,n+1} \leq u_i^s for i \in I,

x_{ij} \geq 0 for i \in I and j \in J',
where

(i) c_i^s = c_i^p, U_i^s = U_i^p, u_i^s = u_i^p for i \neq k,

(ii) and for i = k, if \gamma_0^s + \gamma_1^s y_k is the best linear underestimate in the relevant partitioned domain of y_k at node s, then

c_k^s = \gamma_1^s, K^s = K^p + \gamma_0^s - \gamma_0^p,

where \gamma_0^p is the corresponding intercept at the parent node p.
Further, if node s corresponds to the branch associated with an inequality similar to (13), we have U_k^s = a_{k,t}, u_k^s = u_k^p. If node s corresponds to the branch associated with an inequality similar to (14), then U_k^s = U_k^p, u_k^s = U_k^p - (a_{k,t} + 1).

NOTE: Initially at node 1, we have K^1 = 0, U_i^1 = a_{i m_i}, u_i^1 = a_{i m_i}, and the c_i^1 have been defined earlier.
Thus the following changes occur in each of the branches:

(i) the cost coefficients of all x_{kj}, j = 1, 2, . . ., n, differ by some constant amount, k_1 in one of the branches and k_2 in the other branch; and

(ii) the capacity of plant k is changed in one of the branches, and the upper bound of the slack variable (x_{k,n+1}) corresponding to plant k is changed in the second one.
We may remark here that the new solutions, taking into consideration the above changes, can be generated (without resolving) by the operator theoretic approach discussed in [12, 13].
In Figure 3, if X^2 and X^3 are the optimal solutions to the approximate problems generated by the branching process at nodes 2 and 3, then we can show that the current lower bound, L, and upper bound, U, satisfy the following (by an argument similar to our earlier result):

L = Minimum [Z_1(X^2), Z_1(X^3)] \geq Z_1(X^1),

U = Minimum [Z(X^1), Z(X^2), Z(X^3)],

and

L \leq Z(X*) \leq U.
REMARK: The strict inequality L > Z_1(X^1) generally holds because the best linear underestimate in at least one of the partitions has to be strictly greater than the previous best linear underestimate, since (11) holds. In the case when there is an alternate optimal solution and the revised best linear underestimate corresponding to the branch with this alternate optimal solution is unchanged, the strict inequality will not hold.
If L = Z_1(X^2) we branch on the node corresponding to the solution X^2 to obtain nodes 4 and 5 as shown in Figure 3. If X^4 and X^5 are the solutions at these nodes, then the new current lower and upper bounds are given by

L = Min [Z_1(X^4), Z_1(X^5), Z_1(X^3)],

and

U = Min [Z(X^1), Z(X^2), Z(X^3), Z(X^4), Z(X^5)].
Thus the current lower bound, L, at any stage equals the minimum of the lower bounds at the open nodes (nodes from which there are no branches), whereas the current upper bound, U, is the minimum of the upper bounds over all the nodes. The algorithm terminates when the current lower bound equals the current upper bound at the same stage. This process terminates in a finite number of steps since the number of sites is finite and since, at each branch, we partition the domain of definition of the y_i into disjoint intervals. Further, since each of the f_i(y_i) is a piecewise linear function, we cannot have more than m_i partitions on the variable y_i before the best linear underestimate equals the function itself, thereby not allowing further branching on the variable y_i.
We therefore have the following algorithm.
ALGORITHM I:
(Step 0) For each of the functions f_i(y_i) substitute c_i^0 + c_i^1 y_i, the best linear underestimate of f_i(y_i) for a_{i1} \leq y_i \leq a_{i m_i}. Solve the resultant transportation problem, and create an open node corresponding to the solution X^1. The current lower bound is Z_1(X^1) and the current upper bound is Z(X^1).
(Step 1) If the current lower bound equals the current upper bound and they occur at the same node, then terminate. The optimal solution corresponds to the solution at the node where these bounds are equal. Otherwise go to Step 2.
(Step 2) Determine the open node with the current lower bound and partition on the index k that satisfies Equations (10)-(11). Close this node and generate two additional open nodes corresponding to Equations (13) and (14). Solve the two resultant transportation problems using operator theory from the solution of the old open node. If either of the resultant transportation problems has no feasible solution, then close that node and drop it from consideration for further branching. Let the upper bound associated with such a node equal \infty. Determine the current lower bound over all the open nodes, and the current upper bound over all the nodes. Go to Step 1.
REMARK: If at a branch (node) some y_k \leq a_{k1} = 0 (no plant at site k), then in the resultant transportation problem the variables x_{kj} for j \in J' can be set to zero or, alternatively, dropped.
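The mechanics of Algorithm I can be illustrated on a small instance. The sketch below is ours, not the authors' code: the transportation subproblem is replaced by a toy problem with two plants and a single aggregate demand (so its LP relaxation can be solved greedily), and branching splits the interval of the plant with the largest underestimation gap at an interior breakpoint, a simplification of rule (10)-(11). All names and numbers are invented for the example.

```python
# Toy branch and bound with best linear underestimates (our own sketch):
#   minimize f1(y1) + f2(y2)  s.t.  y1 + y2 = D,  y_i within given intervals,
# with each fi concave piecewise linear.

def pw(bps):
    """Piecewise-linear interpolant through the (y, cost) breakpoints bps."""
    def f(y):
        for (y0, c0), (y1, c1) in zip(bps, bps[1:]):
            if y0 <= y <= y1:
                return c0 + (c1 - c0) * (y - y0) / (y1 - y0)
        raise ValueError("y outside domain")
    return f

def chord(f, lo, hi):
    """Best linear underestimate (intercept, slope) of concave f on [lo, hi]."""
    c1 = (f(hi) - f(lo)) / (hi - lo) if hi > lo else 0.0
    return f(lo) - c1 * lo, c1

def lp_min(slopes, ivals, D):
    """Greedy min of sum(slope_i*y_i) s.t. sum(y_i) = D, y_i in ivals."""
    y = [lo for lo, hi in ivals]
    rem = D - sum(y)
    if rem < -1e-9:
        return None                                  # infeasible subproblem
    for i in sorted(range(len(slopes)), key=lambda i: slopes[i]):
        add = min(rem, ivals[i][1] - ivals[i][0])
        y[i] += add
        rem -= add
    return y if rem <= 1e-9 else None

def branch_and_bound(fs, caps, D, breaks):
    open_nodes = [tuple((0.0, c) for c in caps)]     # a node = intervals for y_i
    best_U, best_y = float("inf"), None
    while open_nodes:
        node = open_nodes.pop()
        unders = [chord(f, lo, hi) for f, (lo, hi) in zip(fs, node)]
        y = lp_min([c1 for _, c1 in unders], node, D)
        if y is None:
            continue                                 # close infeasible node
        L = sum(c0 + c1 * yi for (c0, c1), yi in zip(unders, y))   # lower bound
        U = sum(f(yi) for f, yi in zip(fs, y))       # true cost = upper bound
        if U < best_U:
            best_U, best_y = U, y
        if L >= best_U - 1e-9:
            continue                                 # prune by bound
        # branch on the plant with the largest underestimation gap at y,
        # splitting its interval at an interior breakpoint of its cost
        gaps = [f(yi) - (c0 + c1 * yi) for f, (c0, c1), yi in zip(fs, unders, y)]
        k = max(range(len(fs)), key=lambda i: gaps[i])
        lo, hi = node[k]
        cuts = [b for b in breaks[k] if lo < b < hi]
        if cuts:
            for sub in ((lo, cuts[0]), (cuts[0], hi)):
                open_nodes.append(node[:k] + (sub,) + node[k + 1:])
    return best_U, best_y

f1 = pw([(0, 0.0), (2, 6.0), (4, 8.0)])      # concave: slopes 3, then 1
f2 = pw([(0, 0.0), (3, 3.0), (4, 3.5)])      # concave: slopes 1, then 0.5
best_U, best_y = branch_and_bound([f1, f2], caps=[4, 4], D=5, breaks=[[2], [3]])
```

On this instance the root relaxation yields the solution y = (1, 4) with a gap between lower and upper bound; one split of plant 1's interval closes the gap, and the method returns the true optimum (cost 6.5 at y = (1, 4)), mirroring the prune-or-branch loop described above.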
4. THE PROBABILISTIC MODEL
In this section we investigate the problem in Equations (6')-(9') with special emphasis on the case when the demands b_j, j \in J, are random. The demand is assumed to follow some known multivariate distribution, which allows interaction among the demands at the different centers. Let f(b_j) represent the marginal density function of b_j. We therefore have the following problem (after removing the superscripts for ease of exposition).
(16) Minimize \sum_{i \in I} \sum_{j \in J} c_i x_{ij} + \sum_{i \in I} \sum_{j \in J} t_{ij} x_{ij} + K

such that

(17) \sum_{i \in I} x_{ij} = b_j for j \in J,

(18) \sum_{j \in J} x_{ij} + x_{i,n+1} = U_i for i \in I,

(19) x_{ij} \geq 0 for i \in I and j \in J',

(20) x_{i,n+1} \leq u_i for i \in I,

where each b_j is assumed to be random with known marginal density (or mass) function f(b_j). Though b_j may be a discrete random variable, for ease of theoretical exposition, we consider only the continuous approximation of b_j.
The solution procedure presented in this section is based on the theory of "Two Stage Linear Programming Under Uncertainty," also known as "Stochastic Programming with Recourse," as proposed by Dantzig and Madansky [5], and others [4, 7, 8, 16]. In order to solve this problem we make the following assumptions, which are similar to the ones used by all the above mentioned authors.
(A1) The marginal distribution f(b_j) of each b_j is known.

(A2) This distribution f(b_j) is independent of the choice of x_{ij}.
Charnes, Cooper and Thompson [4] have shown that the two-stage linear program is equivalent to a constrained generalized median problem whose objective function has some absolute value terms. This objective function was shown to be equivalent to a mathematically tractable function by Garstka [8]. In this section we essentially follow the approach and notation given by Garstka [8].
Since each b_j is random, all the constraints given by Equation (17) need not be exactly met. We therefore assume that the firm experiences an opportunity cost of lost demand. This cost, p_j \geq 0, is assumed to be linear and is treated as the per unit penalty cost of not satisfying demand at center j. Similarly, if we have more supply than demand, then there may be a penalty due to holding, storage, and obsolescence. Let us assume that this per unit penalty, d_j \geq 0, due to overproduction, is also linear at center j. Following Charnes, Cooper and Thompson [4], the total penalty cost associated with demand exceeding production is:
(21) \sum_{j \in J} \frac{p_j}{2} \left[ \left| b_j - \sum_{i \in I} x_{ij} \right| + \left( b_j - \sum_{i \in I} x_{ij} \right) \right].
It can be verified easily that if the production exceeds demand, viz., if \sum_{i \in I} x_{ij} > b_j, then the above penalty is zero. Similarly, the total penalty cost associated with excess production is
(22) \sum_{j \in J} \frac{d_j}{2} \left[ \left| \sum_{i \in I} x_{ij} - b_j \right| + \left( \sum_{i \in I} x_{ij} - b_j \right) \right].
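The bracketed construction in (21) and (22) is simply a one-sided penalty: for any t, (1/2)(|t| + t) = max(t, 0). A two-line numerical check (our own illustration, with made-up numbers):

```python
# Check (our illustration) that the bracketed form in (21)-(22) equals a
# one-sided penalty: (p/2) * (|t| + t) == p * max(t, 0).
def under_penalty(p, b, supplied):
    t = b - supplied                       # unmet demand (may be negative)
    return (p / 2.0) * (abs(t) + t)

assert under_penalty(3.0, 10.0, 7.0) == 9.0   # demand exceeds supply: 3*(10-7)
assert under_penalty(3.0, 7.0, 10.0) == 0.0   # supply exceeds demand: no penalty
```

This is why (21) vanishes whenever production exceeds demand, and symmetrically for (22).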
Following the approach of two-stage linear programming under uncertainty [4, 5, 7], we consider our objective function to equal the expected value of the total production costs, distribution costs, and costs due to under- and over-production. The problem (16)-(20) is therefore equivalent to
(23) Minimize E\left\{ \sum_{i \in I} \sum_{j \in J} c_i x_{ij} + \sum_{i \in I} \sum_{j \in J} t_{ij} x_{ij} + \sum_{j \in J} \frac{p_j}{2} \left[ \left| b_j - \sum_{i \in I} x_{ij} \right| + \left( b_j - \sum_{i \in I} x_{ij} \right) \right] + \sum_{j \in J} \frac{d_j}{2} \left[ \left| \sum_{i \in I} x_{ij} - b_j \right| + \left( \sum_{i \in I} x_{ij} - b_j \right) \right] \right\} + K
subject to

(24) \sum_{j \in J} x_{ij} + x_{i,n+1} = U_i for i \in I,

(25) x_{ij} \geq 0 for i \in I and j \in J',

(26) x_{i,n+1} \leq u_i for i \in I.
Let us denote \sum_{i \in I} x_{ij} by b_j^0, and c_i + t_{ij} by C_{ij}, for i \in I and j \in J. It has been shown by Garstka [8] that the objective function given in (23) can be transformed into

(27) Min F_1 + F_2 + K,

where

(28) F_1 = \frac{1}{2} \sum_{i \in I} \sum_{j \in J} (2C_{ij} - p_j + d_j) x_{ij},

and

(29) F_2 = \sum_{j \in J} (p_j + d_j) \int_{b_{jm}}^{b_j^0} (b_j^0 - b_j) f(b_j) db_j,

where b_{jm} is the median of the random variable b_j whose marginal density (mass) function is f(b_j).
It is seen that F_1 is linear, and it can be proved easily that F_2 is convex (see Garstka [8, page 11]). If \lambda_i and \mu_i are the dual variables corresponding to constraints (24) and (26) respectively, then by the Kuhn-Tucker conditions the optimal solution X* = \{x_{ij}^*\}, \mu^*, and \lambda^* satisfy the following:
(30) \lambda_i^* are unconstrained for i \in I,

(31) \mu_i^* \leq 0 for i \in I,

(32) C_{ij} - \frac{1}{2} p_j + \frac{1}{2} d_j + (p_j + d_j) \int_{b_{jm}}^{b_j^0} f(b_j) db_j - \lambda_i^* \geq 0 for i \in I and j \in J,

(33) \sum_{j \in J'} x_{ij}^* = U_i for i \in I,

(34) x_{i,n+1}^* \leq u_i for i \in I,

(35) x_{ij}^* \geq 0 for i \in I and j \in J',

(36) x_{ij}^* \cdot \{left hand side of (32)\} = 0 for i \in I and j \in J,

(37) x_{i,n+1}^* (-\mu_i^* - \lambda_i^*) = 0 for i \in I,

(38) (u_i - x_{i,n+1}^*) \mu_i^* = 0 for i \in I.
These conditions given by (30)-(38) provide a basis for solving the stochastic capacitated transportation problem. Based on these conditions, the following propositions P1, P2, and P3 can be proved. The proofs are similar to those provided by Garstka [8].
P1. For any i \in I and j \in J, if C_{ij} > p_j, then x_{ij} = 0.

P2. For any j \in J and all i \in I, if C_{ij} \geq (p_j - d_j)/2, then \sum_{i \in I} x_{ij} \leq b_{jm} in the optimal solution. It may be noticed that if p_j < d_j, the above condition trivially holds since C_{ij} \geq 0.

P3. For any i \in I and all j \in J, if C_{ij} + d_j < 0, then \sum_{j \in J} x_{ij} = U_i.
In many situations the combined cost of production and transportation, C_{ij}, is greater than half the difference (p_j - d_j). Thus in the algorithm given below we assume that C_{ij} \geq (p_j - d_j)/2. Note, however, that if the per unit cost of underproduction, p_j, is less than that of overproduction, d_j, then the above assumption is unnecessary. In order to solve (30)-(38) let us pose the new deterministic transportation problem given below:
(39) Minimize F_1 = \frac{1}{2} \sum_{i \in I} \sum_{j \in J} (2C_{ij} - p_j + d_j) x_{ij}

such that

(40) \sum_{j \in J'} x_{ij} = U_i for i \in I,

(41) \sum_{i \in I} x_{ij} = b_j* for j \in J,

(42) x_{ij} \geq 0 for i \in I and j \in J', and

(43) x_{i,n+1} \leq u_i for i \in I,
where b_j* are the realizations of b_j. If \lambda_i, \mu_i, and \nu_j are the dual variables corresponding to constraint sets (40), (43), and (41) respectively, then by the Kuhn-Tucker conditions the optimal primal and dual variables satisfy:
(44) \lambda_i^*, \nu_j^* are unconstrained, and \mu_i^* \leq 0, for i \in I, j \in J,

(45) C_{ij} - (p_j - d_j)/2 - \lambda_i^* - \nu_j^* \geq 0, for i \in I, j \in J,

(46) \sum_{j \in J'} x_{ij}^* = U_i, for i \in I,

(47) \sum_{i \in I} x_{ij}^* = b_j*, for j \in J,

(48) x_{i,n+1}^* \leq u_i, for i \in I,

(49) x_{ij}^* \geq 0, for i \in I and j \in J',

(50) x_{ij}^* \cdot (left hand side of (45)) = 0 for i \in I and j \in J,

(51) x_{i,n+1}^* (-\mu_i^* - \lambda_i^*) = 0 for i \in I,

(52) (u_i - x_{i,n+1}^*) \mu_i^* = 0 for i \in I.
It is easy to observe the similarity of the Kuhn-Tucker conditions given in (44)-(52) to those of the original problem. Comparing Equations (32) and (45), we see that a solution to (44)-(52) will satisfy the conditions of the original problem (30)-(38) if

\nu_j^* = (p_j + d_j) \int_{b_j^0}^{b_{jm}} f(b_j) db_j.
Since, by assumption, C_{ij} \geq (p_j - d_j)/2, by proposition P2 stated earlier

\sum_{i \in I} x_{ij} \leq b_{jm};

therefore the problem reduces to finding a set of b_j*, j \in J, such that

\nu_j = (p_j + d_j) \int_{b_j*}^{b_{jm}} f(b_j) db_j.
If such b_j* exist, then an optimal solution to (16)-(20) will be obtained by solving (39)-(43) with b_j* replacing b_j in the constraint set (17). An optimal solution can now be determined by the following algorithm, under the assumption that C_{ij} \geq (p_j - d_j)/2 for every i and j.
ALGORITHM II:
(Step 0) Initialization: Let k, the number of iterations, be 1. Find b_{jm}, the median of the random variable b_j, for all j \in J, and set the initial b_j* = b_{jm} for j \in J in Equation (41). Find the optimal solution and optimal cost to the deterministic transportation problem (39)-(43) and find the dual variables \lambda_i for i \in I and \nu_j for j \in J. (Though the \mu_i for i \in I are obtained, they are not directly needed.) The duals \lambda_i and \nu_j are solved from the relation \lambda_i + \nu_j = C'_{ij} = C_{ij} - (p_j - d_j)/2 for the (i, j) in the optimal basis. For k = 1, let the basis set be denoted by B^1 = \{(i, j) | x_{ij}^1 > 0\}, and let \Lambda^1 = \{\lambda_i^1 | the optimal values of the dual variables for i \in I\} and V^1 = \{\nu_j^1 | the optimal values of the dual variables for j \in J\}.
(Step 1) Iteration Procedure: Find b_j^{(k+1)*} from the following (newsboy type) relationship:

\nu_j^k = (p_j + d_j) \int_{b_j^{(k+1)*}}^{b_{jm}} f(b_j) db_j.
(Step 2) If b_j^{(k+1)*} = b_j^{k*} for every j \in J, then an optimal solution to the stochastic transportation problem (16)-(20) is found; hence STOP. Otherwise go to Step 3.
(Step 3) Area Rim Operator Application: Let \beta_j^{k+1} = b_j^{(k+1)*} for j \in J and let \alpha_i = 0 for i \in I. Following the algorithm given by Srinivasan and Thompson [12, page 215], apply the area operator \delta R_A with the above computed \alpha_i and \beta_j^{k+1} to generate the new optimal solution for the (k+1)th problem. Let the new dual vector be \Lambda^{k+1}. Let k = k + 1 and go to Step 1. (Note that if V^{k+1} and V^k are the same, then b_j^{(k+1)*} = b_j^{k*}, so that the iteration can be stopped.)
The convergence proof of this algorithm is based on the results derived by Charnes and Cooper [3] and Charnes, Cooper, and Thompson [4]. Garstka [8] has given a proof based on [3, 4], and since the proof for our algorithm is similar to that given by Garstka [8], we only provide an outline of the proof. It can be shown that F_2(b_j) is convex in b_j, so that F_1 + F_2 in (27) is also convex. Further, it can be shown that b_j^{(k+1)*} \leq b_j^{k*}, and thus the b_j^{k*} are monotone and bounded by b_j^{1*} = b_{jm}. With the relationship between b_j^{(k+1)*} and \nu_j^k, which is one to one, and due to monotonicity and convexity, convergence is established.
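The fractile computation in Step 1 has a closed form for standard distributions. The sketch below is our own illustration (and relies on the newsboy relation as reconstructed above, \nu_j = (p_j + d_j) \int_{b*}^{b_{jm}} f(b) db): for exponential demand with rate \lambda, F(b) = 1 - e^{-\lambda b}, the median is (ln 2)/\lambda, and Step 1 reduces to inverting F. The function name and parameter values are ours.

```python
# Sketch (our notation, assuming the reconstructed newsboy relation
# nu = (p + d) * integral from b_star to the median b_m of f):
# with exponential demand, F(b) = 1 - exp(-lam*b), so the relation
# nu = (p + d) * (F(b_m) - F(b_star)) = (p + d) * (1/2 - F(b_star))
# inverts in closed form.
import math

def fractile_step(nu, p, d, lam):
    """Solve nu = (p + d) * (1/2 - F(b_star)) for b_star, F exponential."""
    target = 0.5 - nu / (p + d)            # required value of F(b_star)
    assert 0.0 < target <= 0.5, "nu out of range for this distribution"
    return -math.log(1.0 - target) / lam

lam = 2.0
# With nu = 0 the step returns the median, the starting point of Step 0.
b = fractile_step(0.0, p=4.0, d=1.0, lam=lam)
assert abs(b - math.log(2.0) / lam) < 1e-12
# A positive dual nu moves b_star below the median (F(b_star) < 1/2),
# matching the monotone decrease of the iterates noted above.
assert fractile_step(1.0, p=4.0, d=1.0, lam=lam) < b
```

Repeating this step with the duals \nu_j^k from each transportation solve is exactly the iteration whose monotone convergence is outlined above.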
5. THE GENERAL MODEL AND EXTENSIONS
In this section we provide a solution procedure for the problem formulated in section 2 given by Equations (2)-(5). In section 3 we provided Algorithm I, based on a branch and bound procedure, to solve the above problem with the b_j's being deterministic. At every branch in this algorithm, we are faced with a deterministic transportation problem, where the right hand sides in the constraint sets (3) and (4) change from branch to branch and the cost coefficients in (6) also change. However, due to the operator theory [12, 13], the optimal solutions at each branch are obtained in a computationally efficient manner. Let us represent, for ease of exposition, the costs and the right hand sides of a branch s as c_i^s, U_i^s, and u_i^s for i \in I. Now, to consider the problem (2)-(5) in its entirety, it is enough if we introduce the randomness in the b_j's, j \in J. This leads us directly into section 4, where these appropriate costs and right hand sides replace the corresponding ones in Equations (16)-(20). Thus, Algorithm II is directly applicable to this new stochastic transportation problem provided Assumptions A1-A2 hold. It is to be noticed that the t_{ij}, p_j and d_j do not change. Assumptions A1 and A2 are unrestrictive and can be expected to be true in most practical situations. We now present the following unified Algorithm III to solve the original problem posed by Equations (2)-(5) by utilizing Algorithm I first and applying Algorithm II to each branch of Algorithm I.
ALGORITHM III:
(Step 0) For each of the functions f_i(y_i) substitute c_i^0 + c_i^1 y_i, the best linear underestimate of f_i(y_i) for a_{i1} \leq y_i \leq a_{i m_i}. Solve the resultant stochastic transportation problem (6)-(9), with the assumption that b_j* for j \in J in (7) is random, utilizing Algorithm II. Create an open node corresponding to the optimal solution X^1 to the stochastic transportation problem. Denote the current lower bound by Z_1(X^1) and, from the objective function Z given in (23), get the upper bound Z(X^1).
(Step 1) Same as Step 1 of Algorithm I.
(Step 2) Determine the open node with the current lower bound and partition on the index k that satisfies Equations (10)-(11). Close this open node and generate two new open nodes as branches corresponding to (13) and (14). Note that the new problem at each of these branches is a stochastic transportation problem which can be solved using Algorithm II. Determine the current lower bound over all the open nodes, and the current upper bound over all the nodes. Go to Step 1.
REMARK: Since Algorithm III above is the unification of Algorithms I and II, its convergence follows from the convergence of the earlier two algorithms.
Our algorithms facilitate easy consideration of certain extensions to the problem formulated in section 2, such as:

(1) Inclusion of constraints requiring mandatory operation of certain plants.

(2) Capacity expansion or reduction at existing plants.

(3) Geopolitical considerations requiring the operation of plants at mutually exclusive or mutually dependent plant sites.

Extensions (1) and (2) follow from the fact that each a_{i m_i} is arbitrary and each a_{i1} can be made arbitrary, while extension (3) can be imposed when branching occurs.
ACKNOWLEDGEMENT

The authors wish to express their appreciation for the helpful comments of a referee.
BIBLIOGRAPHY
[1] Balachandran, V., "The Stochastic Generalized Transportation Problem — An Operator Theoretic Approach," Management Science Research Report No. 311, GSIA, Carnegie-Mellon University, Pittsburgh, Pa. (Feb. 1973).
[2] Baumol, W. J. and P. Wolfe, "A Warehouse Location Problem," Operations Research 6, 252-263 (1958).
[3] Charnes, A. and W. W. Cooper, "Systems Evaluation and Repricing Theorems," Management Science 9, 33-49 (1962).
[4] Charnes, A., W. W. Cooper, and G. L. Thompson, "Constrained Generalized Medians and Hypermedians as Deterministic Equivalents for Two-Stage Linear Programs Under Uncertainty," Management Science 12, 83-112 (1965).
[5] Dantzig, G. B. and Albert Madansky, "On the Solution of Two-Stage Linear Programs Under Uncertainty," Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Vol. 1, J. Neyman, ed., Berkeley (1961).
[6] El Shafei, A. N. and K. B. Haley, "Facilities Location: Some Foundations, Methods of Solution, Application and Computational Experience," Operations Research Report No. 91, North Carolina State University, Raleigh (1974).
[7] Ferguson, A. R. and G. B. Dantzig, "The Allocation of Aircraft to Routes — An Example of Linear Programming Under Uncertain Demand," Management Science 3, 45-73 (1956).
[8] Garstka, S. J., "Computation in Stochastic Programs with Recourse," Ph.D. Dissertation, Management Science Research Report No. 204, Graduate School of Industrial Administration, Carnegie-Mellon University, Pittsburgh, Pa. (1970).
[9] Hadley, G. and T. M. Whitin, Analysis of Inventory Systems (Prentice-Hall, Englewood Cliffs, N.J., 1963).
[10] Manne, Alan S., "Plant Location Under Economies of Scale — Decentralization and Computation," Management Science 11, 213-235 (1964).
[11] Rech, P. and L. G. Barton, "A Non-Convex Transportation Algorithm," in Applications of Mathematical Programming Techniques, E. M. L. Beale, ed. (American Elsevier Publishing Co., New York, 1970), pp. 250-260.
[12] Srinivasan, V. and G. L. Thompson, "An Operator Theory of Parametric Programming for the Transportation Problem - I," Nav. Res. Log. Quart. 19, 205-226 (1972).
[13] Srinivasan, V. and G. L. Thompson, "An Operator Theory of Parametric Programming for the Transportation Problem - II," Nav. Res. Log. Quart. 19, 227-252 (1972).
[14] Soland, Richard M., "Optimal Facility Location with Concave Costs," Operations Research 22, 373-383 (1974).
[15] White, John A. and R. L. Francis, "Normative Models for Some Warehouse Sizing Problems," AIIE Transactions 3, No. 3, 185-190 (1971).
[16] Williams, A. C., "A Stochastic Transportation Problem," Operations Research 11, 759-770 (1963).
[17] Zoutendijk, G., "Mixed Integer Programming and the Warehouse Allocation Problem," in Applications of Mathematical Programming Techniques, E. M. L. Beale, ed. (American Elsevier Publishing Co., New York, 1970), pp. 203-215.
SOME COMPARATIVE & DESIGN ASPECTS OF FIXED CYCLE
PRODUCTION SYSTEMS
A. L. Soyster
Temple University
Philadelphia, Pa.
D. I. Toof
Arthur Young & Co.
Washington, D.C.
ABSTRACT
A serial production line is defined wherein a unit is produced if, and only if, all machines
are functioning. A single buffer stock with finite capacity is to be placed immediately after
one of the first N-1 machines in the N machine line. When all machines have equal probability
of failure it is shown that the optimal buffer position is exactly in the middle of the line.
This result is synthesized with the earlier work of Koenigsberg and Buzacott including an
analysis of the covariance between transition states. An alternative model formulation is
presented and integrated with previous results. Finally, a sufficient condition and solution
procedure is derived for the installation of a buffer where there is a possible trade-off between
increasing the reliability of the line versus adding a buffer stock.
INTRODUCTION
One of the earlier problems considered in the operations research literature is the analysis of certain types of production lines. In an often cited reference [13], Koenigsberg states that three major problems in the design and operation of production lines are (a) the number of stages in the line, (b) the location of the buffer stocks, and (c) the capacity of these buffer stocks. Buzacott [4, 5, 6, 7] has investigated these problems under various assumptions of machine reliability and product flow. For a recent survey of the pertinent literature see Buxey et al. [3]. In this paper we shall be concerned only with the latter two questions in the context of a fixed cycle serial production line. In section 5 we compare and integrate our results with those of Koenigsberg and Buzacott.
The term production line or automatic production line is a concept that is, in general terms, familiar to almost everyone. However, much of the previous literature has not adequately described the actual mechanism of product flow within the production facility. One considers a sequence of facilities specifically designed to convert raw materials into certain finished goods. In [13] Koenigsberg specifies certain general descriptors of production lines. A typical model of a production line could be as in Figure 1.
[Figure 1 shows a three-stage line with a buffer between stages 2 and 3; product flow is from left to right.]

FIGURE 1.
Here one has a production line with three stages and an allocation of space for a buffer stock between stages 2 and 3. The output of stage 1 is the input of stage 2, while the output of stage 2 is the input of stage 3. The buffer stock is used as follows: if some part of the production line prior to the buffer fails, then the semifinished buffer units are available for the operation of the latter part of the production line. The buffer acts to decouple the production line.

In view of Figure 1 the three general questions posed by Koenigsberg seem reasonable, but a number of fundamental questions concerning the actual mechanics of the product flow must be specified. Answers to these questions will depend upon factors such as cycle time, transfer times, down time associated with each stage, and assumptions concerning a deterministic versus stochastic version of these parameters. One must specify the manner in which the product is transferred from the start of the line to the last stage of the production line. One must also specify how units arrive for processing at the first stage of the production line.
One convenient dichotomy of production lines is whether units arrive at stage 1 in a random manner or in some deterministic fashion. Actually this dichotomy is more specific, as it is generally assumed either that units arrive in a Poisson fashion at stage 1 or that units are always available when needed at stage 1, i.e., an infinite supply. One of the earliest analytical models of automatic production lines is for the following general case in Figure 2.
[Figure 2 shows an N-stage line with buffers B_1, . . ., B_{N-1} between successive stages.]

FIGURE 2.
Suppose that the capacity of each buffer B_i is infinite, that units arrive at stage 1 in a Poisson fashion, and, furthermore, that the actual time to complete processing of a unit at each stage is an exponential random variable. Burke [2] then shows that the departure distribution of stage 1 is actually Poisson. Hence the input rate to stage 2 is again Poisson, and the same argument can be repeated concerning the output distribution of stage 2, etc. These assumptions concerning Poisson arrivals and exponential processing times at each stage lead in general to very tractable analytical results concerning the system's parameters. For example, in [12] Hunt generates the mean number of units in the system. When there is a finite capacity on the buffer size, the analysis of certain characteristics of this sequential array of queues becomes considerably more complex; see, e.g., [11].
In this paper we shall not take the approach that units arrive randomly at the first stage. The more realistic assumption is that units are available at the first stage whenever they are needed. Such an assumption in general is the appropriate one for the typical production line. In addition, the assumption is made that the processing time at each stage is a constant, i.e., fixed cycle production lines. A certain element of uncertainty, though, is introduced via the reliability of each stage.
Along with other defining characteristics, Koenigsberg makes this dichotomy of random versus deterministic arrival of units at stage 1. The two approaches are coined queuing and stochastic in [13]. This paper would be correctly labeled the stochastic case. In the next section a definition of a serial production line is proposed. Along with the definition of a serial production line, an alternative production line model is defined for comparison purposes. This alternative model underscores the need to accurately define the micromechanics of how units are to be handled and transferred along the production line. Apparently such considerations have attracted little attention in the literature.
DESCRIPTION OF AN EXACT SERIAL MODEL
Consider an N stage production line where the output of machine i is the input to machine i+1. Let T_i, a constant, be the processing time of machine i, and define the cycle time T = \sum_{i=1}^{N} T_i.

During each cycle each machine i has a known probability, p_i, of failing. The probability of all machines working in a given cycle is \prod_{i=1}^{N} q_i, where q_i = 1 - p_i.
Raw material units from an unlimited supply are available as input to machine 1, and finished units are the output of machine N. We shall assume that the machines in the production line behave analogously to a set of lights connected in a series circuit, i.e., if one or more of the machines fail, then the production line fails. At the beginning of each cycle a check is made to determine the status of each machine. If each of the machines is up (not failed), a raw material unit is processed through the line, and the probability that all machines will be up is \prod_{i=1}^{N} q_i. The distribution of up cycles of machine i is geometrically distributed with parameter q_i, and the probability of successfully completing a unit in a given cycle is \prod_{i=1}^{N} q_i. Note that during each cycle the failure probability, p_i, of machine i is constant, and we assume a constant repair time of one cycle. This, of course, is equivalent to the assumption that the distribution of repair time, measured in cycles, is geometric with parameter p_i. In a subsequent section we consider the serial model when the distribution of repair time is of a more general form. In particular we shall explore some of the implications when the status of machine i, i.e., up or down, is dependent upon its status during the previous cycle.
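The per-cycle success probability just described can be computed directly. The short sketch below is our own numerical illustration (function names and the sample q_i are invented); it also checks the mean run of up cycles implied by the geometric up-time, under the assumption that cycles are independent.

```python
# Numerical illustration (ours): for an N-machine fixed cycle line with
# per-cycle survival probabilities q_i, a cycle yields a finished unit with
# probability prod(q_i).
from functools import reduce

def line_availability(qs):
    """Probability that every machine is up in a given cycle."""
    return reduce(lambda acc, q: acc * q, qs, 1.0)

qs = [0.95, 0.90, 0.95]                  # q_i = 1 - p_i for a 3-machine line
avail = line_availability(qs)
assert abs(avail - 0.95 * 0.90 * 0.95) < 1e-12

# With independent cycles, the number of consecutive up cycles of machine i
# is geometric; its mean is q_i / p_i (here q = 0.9, p = 0.1 gives 9 cycles).
p = 0.10
assert abs((1.0 - p) / p - 9.0) < 1e-12
```

Note how quickly the product degrades with N: even with every q_i = 0.95, a ten-machine line completes a unit in only about 60 percent of cycles, which is the motivation for the buffer considered next.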
Now suppose that there exists an operator associated with each machine i. Consider what this serial model implies as to the mechanics of how the semifinished units are transferred throughout the line. Suppose that at the beginning of a particular cycle all machines are up. In this case operator 1 processes a raw material unit at machine 1 and then transfers the semifinished unit to operator 2. While operator 2 processes the semifinished unit, apparently all the operators 1, 3, 4, . . ., N remain idle. Note that during a cycle only one semifinished unit is ever handled on the production line. If at the beginning of a cycle one or more machines are down (failed), then this cycle is aborted and the line remains idle during the cycle time, T.
In section 5, we demonstrate that this definition is implicit in much of the earlier literature concerning production line models.
The applicability of this serial model would depend upon a number of factors in any real world situation. One application should include the case wherein a single worker (computer, another machine) services the entire line, i.e., moves from machine to machine with the semifinished unit.
Next consider a simple production line with two machines, but between the two machines a buffer with capacity of M units is installed (Figure 3).
[Figure 3 shows a two-machine line with a buffer of capacity M between the machines.]

FIGURE 3.
The buffer consists of units that have already been processed on machine 1. Consider a cycle when machine 1 is down, but machine 2 is up. If the buffer is not empty, then a unit from the buffer can be processed through machine 2. The cycle is successful in the sense that a finished good unit is obtained from the production line.
Next consider a cycle wherein machine 2 is down but machine 1 is up. If the number of units in the buffer is less than M, a raw material unit can be processed through machine 1 and placed in the buffer. If the buffer is already at capacity, no such transfer can take place. In either case no finished good units for this cycle are discharged from the production line. Note that the size of the buffer stock will vary between zero and M.
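A minimal Monte Carlo sketch of the two-machine buffer rules just stated (the code and its constants are ours and purely illustrative). The closed form q(M + q)/(M + 1) used as a check is equation (6) of this paper specialized to N = 2 machines of equal reliability q.

```python
import random

def serial_with_buffer(q, M, n_cycles, seed=7):
    """Two-machine serial line with an intermediate buffer of capacity M.
    The buffer is touched only when exactly one machine is down."""
    rng = random.Random(seed)
    buf, produced = 0, 0
    for _ in range(n_cycles):
        m1_up = rng.random() < q
        m2_up = rng.random() < q
        if m1_up and m2_up:
            produced += 1            # unit passes straight through the line
        elif m2_up and buf > 0:
            buf -= 1                 # machine 2 works off the buffer
            produced += 1
        elif m1_up and not m2_up and buf < M:
            buf += 1                 # machine 1 stocks the buffer
    return produced / n_cycles

q, M = 0.9, 2
rate = serial_with_buffer(q, M, 300_000)
assert abs(rate - q * (M + q) / (M + 1)) < 0.01
```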
For the N machine line the question arises as to the optimal placement of a single buffer capacity, for there would be N − 1 possible positions. This question has earlier been considered by Buzacott [4] in the context of certain approximating models of fixed cycle production lines. In the following section we develop a mathematical model of this serial production line and focus on the determination of the optimal placement of a single buffer capacity along the N machine line. The optimal placement will be that position which maximizes the steady state probability that a given cycle furnishes a finished good unit.
Before proceeding to the next section, we contrast the assumptions of the serial model with another model. In [12] Hunt defines the "unpaced belt production line" as a particular case of a network of queues. Actually, in this case he permits no queues at all, except for an infinite capacity queue prior to the first machine, and furthermore, no vacant machines are allowed. Here the line is to move all at once. This definition is in the context of a network of queues with Poisson arrivals and exponential service times. For the case in which units are always available at machine 1 when needed, unit processing times are constant, and finite capacity buffers are permitted, a similar model can be defined. By the "sequential relay" model with finite buffer capacities, we shall interpret the schematic of Figure 3 in the following manner: each machine has a separate operator, and during each cycle (here defined as the maximum of the individual unit processing times T_i, that is, T = max_i T_i), each operator adheres to the following rules:
(1) Operator 2 first:
(i) checks whether machine 2 is up;
(ii) if machine 2 is up, he checks whether an additional unit can be placed in the output buffer (which we assume has infinite capacity for the last machine in the line);
(iii) if the output buffer can accept another unit, he checks to see if there is a unit available from the input buffer.
If (i), (ii) and (iii) are all satisfied, then operator 2 processes a unit on machine 2. Otherwise operator 2 remains idle during this cycle. We assume the repair time of machine 2 is T, the cycle time.
(2) Operator 1 then:
(i) checks whether machine 1 is up;
(ii) if machine 1 is up, checks whether an additional unit can be placed in the output buffer;
(iii) if the output buffer can accept another unit, he checks to see if there is a unit available from the input buffer (which we assume has an infinite supply for the first machine in the line).
If (i), (ii) and (iii) are all satisfied, then operator 1 processes a unit on machine 1. Otherwise operator 1 remains idle during this cycle. Similarly, we assume the repair time for machine 1 is T, the cycle time.
FIXED CYCLE PRODUCTION SYSTEMS 441
For an N-machine line these rules can be expanded. The name sequential relay is used to accentuate the principle that, in this model, before operator i can determine his status during a cycle, operator i + 1 must first discern his status. Note that in the sequential relay model during any given cycle as many as N distinct units can be processed on the N machines. An important contrast between the two models is exhibited by Figure 3 when both machines are up and the buffer is empty. In the serial model a finished good unit is produced, while in the sequential relay model only machine 1 would be operated, and then at the start of the next cycle the buffer would contain a single unit.
A similar question concerning the optimal placement of a single buffer capacity is applicable to the sequential relay model, although the analysis presented in the next section for the serial model is apparently more complex for the sequential relay model. The difficulty lies in the fact that, by the very definition of the sequential relay model, a buffer of size one must be associated with each of the N machines.
3. MATHEMATICAL FORMULATION OF A SERIAL PRODUCTION LINE
For a serial production line with N machines there are N − 1 possible positions for the buffer. The buffer can be placed after the first machine, after the second, after the third, . . ., and, lastly, after the (N−1)st machine. Consider now the general case where the buffer with capacity M is placed after the ith machine, 1 ≤ i ≤ N − 1. We shall demonstrate that the number of units in the buffer defines a finite Markov chain. Let

α_i = q_1 q_2 · · · q_i,     β_i = q_{i+1} q_{i+2} · · · q_N,

and let X_n be the number of units in the buffer at the beginning of cycle n. Now consider the probability distributions for X_{n+1} for the following three cases:

(a) X_n = 0;
(b) 0 < X_n < M;
(c) X_n = M.

In case (a), X_{n+1} = 0 if either
(i) all machines are up during cycle n, or
(ii) at least one machine is down before the buffer.

Hence

P[X_{n+1} = 0 | X_n = 0] = α_iβ_i + (1 − α_i),
P[X_{n+1} = 1 | X_n = 0] = 1 − [α_iβ_i + (1 − α_i)] = α_i − α_iβ_i.

Similar analysis applies to cases (b) and (c), and the complete transition matrix for this finite, irreducible Markov chain is given in Figure 4.
We now let X^{(i)} denote the steady state distribution of the buffer size when the buffer is placed immediately after machine i. Define λ_i = α_i − α_iβ_i, μ_i = β_i − α_iβ_i, and ρ_i = λ_i/μ_i. With this substitution the transition matrix takes a well-known form whose steady state solution (see [8]) is

(1)    P[X^{(i)} = t] = (1 − ρ_i)ρ_i^t / (1 − ρ_i^{M+1}),    λ_i ≠ μ_i,
       P[X^{(i)} = t] = 1/(M + 1),                            λ_i = μ_i,

for 0 ≤ t ≤ M. With the steady state solution for X^{(i)} known, it is now possible to determine the steady state probability that a given cycle furnishes a finished good unit from machine N:

P[finished good when buffer placed between machines i and i + 1]
    = P[all machines up after the buffer] · {P[all machines up before the buffer]
      + P[at least one machine down before the buffer] · P[X^{(i)} ≥ 1]}.

This quantity would be

(3)    β_i{α_i + (1 − α_i)P[X^{(i)} ≥ 1]}.
Note that

(2)    P[X^{(i)} ≥ 1] = (ρ_i − ρ_i^{M+1}) / (1 − ρ_i^{M+1}),    λ_i ≠ μ_i,
       P[X^{(i)} ≥ 1] = M/(M + 1),                               λ_i = μ_i.
The transition probabilities among the buffer states 0, 1, . . ., M are

P[0, 0] = 1 − α_i + α_iβ_i,        P[0, 1] = α_i − α_iβ_i;
P[t, t−1] = β_i − α_iβ_i,    P[t, t] = 1 − α_i − β_i + 2α_iβ_i,    P[t, t+1] = α_i − α_iβ_i    (1 ≤ t ≤ M − 1);
P[M, M−1] = β_i − α_iβ_i,    P[M, M] = 1 − β_i + α_iβ_i,

with all other entries zero.

Figure 4. Transition matrix for buffer states.
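As a numerical check on the chain of Figure 4, the sketch below (ours; names and parameter values are illustrative) builds the transition matrix, extracts the stationary law by power iteration, and compares it with the closed form (1).

```python
def buffer_chain(alpha, beta, M):
    """Transition matrix of Figure 4 for buffer states 0..M."""
    lam = alpha - alpha * beta    # lambda_i = alpha_i - alpha_i*beta_i
    mu = beta - alpha * beta      # mu_i = beta_i - alpha_i*beta_i
    P = [[0.0] * (M + 1) for _ in range(M + 1)]
    for t in range(M + 1):
        if t > 0:
            P[t][t - 1] = mu
        if t < M:
            P[t][t + 1] = lam
        P[t][t] = 1.0 - (mu if t > 0 else 0.0) - (lam if t < M else 0.0)
    return P

def stationary(P, iters=20000):
    """Stationary distribution by repeated multiplication pi <- pi P."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[s] * P[s][t] for s in range(n)) for t in range(n)]
    return pi

def closed_form(alpha, beta, M):
    """Equation (1): geometric-like stationary law with ratio rho."""
    lam, mu = alpha - alpha * beta, beta - alpha * beta
    if abs(lam - mu) < 1e-12:
        return [1.0 / (M + 1)] * (M + 1)
    rho = lam / mu
    return [(1 - rho) * rho**t / (1 - rho**(M + 1)) for t in range(M + 1)]

q, N, i, M = 0.9, 4, 1, 3
alpha, beta = q**i, q**(N - i)
pi = stationary(buffer_chain(alpha, beta, M))
cf = closed_form(alpha, beta, M)
assert max(abs(a - b) for a, b in zip(pi, cf)) < 1e-8
```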
Hence to find the optimal placement of this single buffer with capacity M, one needs that particular i, 1 ≤ i ≤ N − 1, which maximizes

β_i{α_i + (1 − α_i)P[X^{(i)} ≥ 1]}.
Now consider the simpler case of (3) when q_i = q for all i. In this special case it follows that α_i = q^i and β_i = q^{N−i}, so that λ_i = q^i − q^N and μ_i = q^{N−i} − q^N. Note that if the buffer is placed immediately after machine N/2, i.e., i = N/2, then λ_i = μ_i. Hence

(4)    P[X^{(i)} ≥ 1] = M/(M + 1).

If i ≠ N/2, then λ_i ≠ μ_i and

(5)    P[X^{(i)} ≥ 1] = (ρ_i − ρ_i^{M+1}) / (1 − ρ_i^{M+1}).
We now compute the probability for a given cycle of obtaining a unit from machine N when the buffer is placed immediately after machine N/2. This probability from (3) is

(6)    q^{N−N/2}[q^{N/2} + (1 − q^{N/2}) M/(M + 1)] = q^N + (q^{N/2} − q^N) M/(M + 1).

Note that as M becomes large this probability approaches q^{N/2}, i.e., the second part of the line becomes independent of the first N/2 machines.
Next consider the general form for the probability of obtaining a unit from machine N when the buffer is placed after machine i ≠ N/2. Again from (3) and (5) one obtains

q^{N−i}{q^i + (1 − q^i)(ρ_i − ρ_i^{M+1})/(1 − ρ_i^{M+1})},

and after simplifying one obtains as the steady state probability, ssp(i),

(7)    ssp(i) = q^N + μ_i (ρ_i − ρ_i^{M+1})/(1 − ρ_i^{M+1}),    i ≠ N/2.
We shall show that for all integers i ≠ N/2, 1 ≤ i ≤ N − 1, (7) < (6); in fact we shall show that if i ∈ [1, N/2) ∪ (N/2, N], then (7) < (6). Hence from here on we shall treat i as a continuous variable. The general strategy will be to show that the derivative of (7) with respect to i is greater than zero for i ∈ [1, N/2) and less than zero for i ∈ (N/2, N].
The derivative of (7) with respect to i is obtained using the chain rule, and after simplification yields

(8)    [ln q / (ρ^{M+1} − 1)²] { [ρ^M(ρ − 1)(M + 1) − (ρ^{M+1} − 1)] q^i − ρ^M[(ρ^{M+1} − 1) − (ρ − 1)(M + 1)] ρ q^{N−i} },

where ρ is written for ρ_i. Bernoulli's inequality [1] states that if ρ > 0, then

(ρ^{M+1} − 1) − (ρ − 1)(M + 1) ≥ 0    (strictly greater if ρ ≠ 1 and M ≥ 1),

so that dividing (8) by ρ^M[(ρ^{M+1} − 1) − (ρ − 1)(M + 1)] does not change the sign of (8):

(9)    [ln q / (ρ^{M+1} − 1)²] { [ρ^M(ρ − 1)(M + 1) − (ρ^{M+1} − 1)] / (ρ^M[(ρ^{M+1} − 1) − (ρ − 1)(M + 1)]) · q^i − ρ q^{N−i} }.

Now let

γ = [ρ^M(ρ − 1)(M + 1) − (ρ^{M+1} − 1)] / (ρ^M[(ρ^{M+1} − 1) − (ρ − 1)(M + 1)]).
The proofs of the two lemmas below follow from induction and are omitted.

LEMMA 1: If 1 ≤ i < N/2 then γ(i) < 1, and if N/2 < i ≤ N then γ(i) > 1.

LEMMA 2: If 1 ≤ i < N/2 then q^i − ρ_i q^{N−i} < 0, and if N/2 < i ≤ N then q^i − ρ_i q^{N−i} > 0.

Since ln q ≤ 0, this means that the derivative of (7) is positive for 1 ≤ i < N/2 and negative for N/2 < i ≤ N. We therefore have the following result.

THEOREM 1: Assume that the number of machines, N, is even. Then for any positive integer M and any q, the optimal placement of the buffer is immediately after machine N/2.
PROOF: The proof of this theorem is essentially complete. We must only show that (7) is continuous at i = N/2. By definition the steady state probability of obtaining a unit from a given cycle when i = N/2 is

q^N + (q^{N/2} − q^N) M/(M + 1).

When i → N/2, then ρ_i → 1, so that in (7)

lim_{i→N/2} ssp(i)

has the indeterminate form 0/0. But an application of l'Hospital's Rule [9] shows that

lim_{i→N/2} (ρ(i)^{M+1} − ρ(i)) / (ρ(i)^{M+1} − 1) = M/(M + 1),

and since

lim_{i→N/2} μ(i) = q^{N/2} − q^N,

continuity is proven.    Q.E.D.
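Theorem 1 can also be checked numerically. Assuming the expressions (6) and (7), the sketch below (ours; the parameter values are illustrative) evaluates ssp(i) at every candidate buffer position and confirms that the maximum occurs at i = N/2.

```python
def ssp(q, N, M, i):
    """Steady state probability of a finished unit per cycle with the
    buffer after machine i, per equations (6) and (7)."""
    lam = q**i - q**N
    mu = q**(N - i) - q**N
    if abs(lam - mu) < 1e-12:                 # i = N/2: lambda_i = mu_i
        return q**N + mu * M / (M + 1)
    rho = lam / mu
    return q**N + mu * (rho - rho**(M + 1)) / (1 - rho**(M + 1))

q, N, M = 0.9, 6, 2
best = max(range(1, N), key=lambda i: ssp(q, N, M, i))
assert best == N // 2    # Theorem 1: optimum immediately after machine N/2
```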
4. SEQUENTIAL RELAY MODEL AND COMPARISONS
In this section we shall consider the mathematical formulation of a two-machine sequential relay model and make some reasonable comparisons between the sequential relay and serial model performance characteristics. Consider a two-machine sequential relay model with a buffer capacity of M units (M ≥ 1) placed between the two machines. Assume that machine i has reliability q_i and that both machines have identical unit processing times. If X_n is the state of the buffer at the beginning of cycle n, then

P[X_{n+1} = 0 | X_n = 0] = p_1,
P[X_{n+1} = 1 | X_n = 0] = q_1.

Here, if X_n = 0, the second machine necessarily is idle during the nth cycle. This is the only difference from the serial model. The transition probabilities from all other states would be identical to the serial model, i.e., Figure 4 with α_i = q_1 and β_i = q_2.
The steady state distribution of the buffer size for the sequential relay model is, for 0 ≤ t ≤ M and q_1 ≠ q_2,

P[X = t] = (q_2 − q_1)/(q_2 − q_1ρ^M),                                t = 0,
P[X = t] = [(q_2 − q_1)/(q_2 − q_1ρ^M)] · q_1ρ^{t−1}/(q_2p_1),        1 ≤ t ≤ M,

where ρ = q_1p_2/(q_2p_1). When q_1 = q_2 = q,

P[X = 0] = (1 − q)/(M + 1 − q),    P[X = t] = 1/(M + 1 − q),    1 ≤ t ≤ M.
The long run probability of producing a unit in a given cycle is q_2 · P[X ≥ 1], that is,

q_1q_2(1 − ρ^M)/(q_2 − q_1ρ^M),    q_1 ≠ q_2,

and

qM/(M + 1 − q),    q_1 = q_2 = q.
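These closed forms can be checked against the chain itself. The sketch below (ours; values illustrative) builds the relay chain described above, with row 0 equal to (p_1, q_1, 0, . . .) and all other rows as in Figure 4 with α = q_1, β = q_2, computes its stationary distribution, and compares q_2·P[X ≥ 1] with the q_1 ≠ q_2 formula.

```python
def relay_chain(q1, q2, M):
    """Two-machine sequential relay chain on buffer states 0..M."""
    p1, p2 = 1 - q1, 1 - q2
    P = [[0.0] * (M + 1) for _ in range(M + 1)]
    P[0][0], P[0][1] = p1, q1          # machine 2 idle when buffer empty
    for t in range(1, M + 1):
        P[t][t - 1] = q2 * p1          # mu: machine 2 up, machine 1 down
        if t < M:
            P[t][t + 1] = q1 * p2      # lam: machine 1 up, machine 2 down
            P[t][t] = 1 - q2 * p1 - q1 * p2
        else:
            P[t][t] = 1 - q2 * p1      # machine 1 blocked at capacity
    return P

def stationary(P, iters=20000):
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[s] * P[s][t] for s in range(n)) for t in range(n)]
    return pi

q1, q2, M = 0.85, 0.9, 2
pi = stationary(relay_chain(q1, q2, M))
rho = q1 * (1 - q2) / (q2 * (1 - q1))
closed = q1 * q2 * (1 - rho**M) / (q2 - q1 * rho**M)
assert abs(q2 * (1 - pi[0]) - closed) < 1e-8   # output = q2 * P[X >= 1]
```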
The crucial difference between the sequential relay and serial models is the manner in which the goods flow through the line. For the sequential relay line units flow from machine to buffer to machine, whereas in the serial line units flow from machine to machine and buffers are only accessed in the event of machine failure. The question arises as to the comparability of the two models. Consider the case N = 2 machines. Assume that the reliability of each machine is q for both models, and assume further that the cycle times are identical. This latter assumption implies that each machine in the serial model processes twice as fast as those in the sequential relay model. But the sequential relay line will require two operators while only one is needed for the serial line. The following theorem provides bounds for the steady state output of the sequential relay line in terms of the serial line when N = 2. Recall from (6) that the steady state probability of producing a unit for the serial model when the buffer capacity is M is

q(M + q)/(M + 1).
THEOREM 2: Assume that the cycle times and machine reliabilities, q, for the two-machine serial and sequential relay lines are identical and that M ≥ 1. Then

q(M + q)/(M + 1) ≥ qM/(M + 1 − q) ≥ q(M − 1 + q)/M,

i.e., the output of the sequential relay line with buffer capacity M is bounded above by a serial line with the same capacity and bounded below by a serial line with capacity M − 1.
PROOF: Suppose that the first inequality were not true; then

(M + 1 − q)(M + q) < (M + 1)M,

which after simplification yields

q < q².

Similarly, suppose that the second inequality were false, i.e.,

M² < (M + 1 − q)(M − 1 + q).

After some cancellation one obtains

0 < −1 + 2q − q²,    i.e.,    0 < −(1 − q)²,

a contradiction in either case.    Q.E.D.
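The two inequalities of Theorem 2 can be spot-checked over a grid of parameter values (the grid below is illustrative):

```python
# Theorem 2 bounds:  q(M+q)/(M+1)  >=  qM/(M+1-q)  >=  q(M-1+q)/M
for M in range(1, 11):
    for k in range(1, 100):
        q = k / 100.0
        serial_M = q * (M + q) / (M + 1)          # serial line, capacity M
        relay = q * M / (M + 1 - q)               # sequential relay line
        serial_M_minus_1 = q * (M - 1 + q) / M    # serial line, capacity M-1
        assert serial_M >= relay - 1e-12
        assert relay >= serial_M_minus_1 - 1e-12
```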
5. RELATIONSHIP AND COMPARISON WITH OTHER PRODUCTION MODELS
In this section we briefly review the work of Koenigsberg and Buzacott and present certain relationships of their results with those of the present paper. But first we consider the scope and limitations of the serial model of section 3. The serial model of this paper characterizes the operating characteristics of each machine by one parameter q_i, i.e., the distribution of down time is geometric with parameter p_i = 1 − q_i. Suppose the distribution of down time cycles is given independently of q_i. As an approximation one may attempt to model this more general case by defining U_i and D_i as the mean length of a period of up cycles and down cycles respectively, and define

q′_i = U_i/(U_i + D_i),
p′_i = D_i/(U_i + D_i).

Then q′_i, p′_i can be used in the serial model to obtain an approximation of the steady state output. We now consider the implications of this approximation in a simple but illustrative case.
The disturbing feature of using the approximations q′_i and p′_i is that U_i = D_i = 100 cycles would produce the same parameters as U_i = D_i = 1 cycle. Intuitively one would expect that the latter case would be more desirable in view of the finite capacity buffer of size M. That in general this should be so is illustrated by the following example. Consider a two-machine serial production line with a buffer capacity M = 1. The first machine has reliability q, and the second machine is characterized by a two-state Markov chain as follows:

         U         D
U  |  a_{11}    a_{12}
D  |  a_{21}    a_{22}

The probability that the second machine is up (U) during cycle n + 1 is a_{11} if the second machine was up during cycle n, while the probability is a_{21} if the second machine was down during cycle n. Of course the first machine is also described by a two-state Markov chain, but here both rows would be of the form (q, p). The steady state distribution for the second machine is of the form

(γ_1, γ_2) = ( a_{21}/(a_{21} + a_{12}),  a_{12}/(a_{21} + a_{12}) ),
while its limiting covariance is

Γ = lim_{n→∞} {E(X_nX_{n+1}) − E(X_n)E(X_{n+1})} = γ_1(a_{11} − γ_1),

where X_n = 1 corresponds to up and X_n = 0 corresponds to down. The covariance for the first machine is, of course, 0.
The two-machine serial line can be described via a four-state Markov chain, where the state (1, U) means that a unit existed in the buffer at the beginning of cycle n and that the second machine was up during cycle n:

            (1,U)       (1,D)       (0,U)       (0,D)
(1,U)  |  qa_{11}    qa_{12}    pa_{11}    pa_{12}
(1,D)  |  a_{21}     a_{22}     0          0
(0,U)  |  0          0          a_{11}     a_{12}
(0,D)  |  qa_{21}    qa_{22}    pa_{21}    pa_{22}

Note that if the system is in state (1, D) at the beginning of cycle n, then the first machine is blocked during cycle n. Also note that if the system is in state (0, U) at the beginning of cycle n, then at the beginning of cycle n + 1 there can never be any units in the buffer. This is simply a consequence of the definition of the serial line.
The steady state distribution of this four-state Markov chain is

(π_1, π_2, π_3, π_4) = ( qa_{21}/(q + a_{21}),  [qa_{22} − q²(a_{11} − a_{21})]/(q + a_{21}),  [qa_{21} + q²(a_{11} − a_{21})]/(q + a_{21}),  pa_{21}/(q + a_{21}) )

when (γ_1, γ_2) is assumed equal to (q, p), i.e., both machines have the same steady state reliability q. Now the steady state probability of obtaining a unit from this two-machine serial system would be π_1 + qπ_3, since a unit is always generated when in state (1, U), and if the first machine is up a finished good unit is obtained when the system is in state (0, U). The expression π_1 + qπ_3 can be simplified to

[q(a_{21} + a_{12}) + q²] / [(a_{21} + a_{12}) + 1].
Next observe that the covariance Γ can be rewritten in the form

Γ = pq[1 − (a_{21} + a_{12})],

so that substituting for a_{21} + a_{12} yields for π_1 + qπ_3

q[pq(1 + q) − Γ] / (2pq − Γ).

Observe now that since 2pq > pq(1 + q), π_1 + qπ_3 is a decreasing function of Γ. Hence the steady state output is larger when Γ is negative, smaller when Γ is positive. When Γ = 0 one obtains (q + q²)/2, which corresponds to (6) when M = 1 and N = 2.
This example indicates that when the covariance between X_n and X_{n+1} is positive, using the approximation q, p for the serial model overestimates the steady state output, while the reverse is true if the variables are negatively correlated. In general, one would expect positive correlation to be the typical case. Now consider an early production line model where the role of the covariance can be easily examined.
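The decreasing dependence of the output on Γ can be verified directly from the four-state chain. In the sketch below (ours), the parametrization a_21 = qs, a_12 = ps is chosen so that (γ_1, γ_2) = (q, p); s and the numeric values are illustrative.

```python
def four_state_output(q, s, iters=20000):
    """Long-run output pi1 + q*pi3 of the four-state chain
    (1U, 1D, 0U, 0D) for a two-machine serial line with buffer M = 1,
    the second machine governed by a two-state chain with a21 = q*s,
    a12 = p*s (steady state reliability q)."""
    p = 1 - q
    a21, a12 = q * s, p * s
    a11, a22 = 1 - a12, 1 - a21
    P = [
        [q * a11, q * a12, p * a11, p * a12],   # from (1, U)
        [a21,     a22,     0.0,     0.0    ],   # from (1, D): machine 1 blocked
        [0.0,     0.0,     a11,     a12    ],   # from (0, U): buffer stays empty
        [q * a21, q * a22, p * a21, p * a22],   # from (0, D)
    ]
    pi = [0.25] * 4
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(4)) for j in range(4)]
    return pi[0] + q * pi[2]

q, s = 0.8, 0.5
p = 1 - q
gamma = p * q * (1 - s)                      # Gamma = pq[1 - (a21 + a12)]
formula = q * (p * q * (1 + q) - gamma) / (2 * p * q - gamma)
assert abs(four_state_output(q, s) - formula) < 1e-8
```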
In [13] Koenigsberg discusses a certain production line which he attributes to Finch. The Koenigsberg-Finch (KF) model is for a production line with two machines and a single buffer stock with finite capacity M, such as depicted in Figure 3. In the KF model each machine is assumed to be in either one of two states, working or not working. The processing time per unit for machine i is a constant, 1/h_i, i = 1, 2. The distribution of working time for machine i is assumed to be the continuous exponential random variable with mean 1/λ_i, and the distribution of down time is also exponentially distributed with mean 1/μ_i. The problem posed by KF is the determination of the gain in productive output as a function of the buffer size.
In the subsequent derivation given by KF it is assumed that the output of a two-machine line with a buffer can differ from the output of a two-machine line with no buffer only when the first machine is down. Hence the output of either line, buffer or no buffer, in the case when both machines are working is the same. This is the assumption in the serial production line.
Observe now that λ_i/μ_i is the ratio of the mean down time to the mean up time for stage i. For the case λ_1/μ_1 = λ_2/μ_2 = U and h_1 = h_2 = h, KF show that the ratio of the expected increase in units with a buffer of capacity M to the expected units with no buffer is

(10)    U[1 − h(1 + U)(μ_1 + μ_2) / {h(1 + U)(μ_1 + μ_2) + M(μ_1 + μ_2U)(μ_2 + μ_1U)}].
If we make the assumptions that
(1) both machines are identical, i.e., μ_1 = μ_2 = μ, λ_1 = λ_2 = λ, and h_1 = h_2 = h (units/time);
(2) only one machine is operated at a time during a given cycle, i.e., a single operator runs both machines, which implies an actual production rate of h/2 (assume h is unity);
then these assumptions reduce (10) to

(11)    (λ/μ) · M/(M + 1/(λ + μ)).
Substituting p′ = λ/(λ + μ) and q′ = μ/(λ + μ), which are respectively the ratios of mean down time and mean up time to total time, yields

(12)    (p′/q′) · M/(M + 1/(λ + μ)).

Now we multiply (12) by (q′)² to yield the absolute increase in output due to the buffer capacity:

(13)    q′p′ · M/(M + 1/(λ + μ)).
Observe now that (13) is identical to the marginal gain for the serial line when N = 2 and λ + μ = 1. The implication of λ + μ = 1 can be seen when the exponential assumptions of Koenigsberg are discretized to obtain a two-state Markov chain with transition matrix

         U            D
U  |  1 − λΔt      λΔt
D  |  μΔt          1 − μΔt

where λ is the breakdown rate per cycle, μ the repair rate per cycle, and Δt = 1 the cycle length. When λ + μ = 1, the rows of the transition matrix are identical. Otherwise a covariance exists, and λ + μ can vary between 0 and 2, which means

(14)    0 < M/(M + 1/(λ + μ)) < M/(M + 1/2).
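The sign behavior of the covariance for this discretized chain can be confirmed with a few lines of code (ours; the rate values are illustrative). For the chain above, γ_1 = μ/(λ + μ) and Γ = γ_1(a_{11} − γ_1) = λμ(1 − λ − μ)/(λ + μ)².

```python
def discretized_covariance(lam, mu):
    """Limiting covariance of the discretized up/down chain with rows
    (1 - lam, lam) and (mu, 1 - mu)."""
    gamma1 = mu / (lam + mu)     # steady state probability of being up
    a11 = 1 - lam
    return gamma1 * (a11 - gamma1)

# Gamma vanishes when lam + mu = 1 (identical rows, the serial-model case),
# is positive when lam + mu < 1, and negative when lam + mu > 1.
assert abs(discretized_covariance(0.3, 0.7)) < 1e-12
assert discretized_covariance(0.2, 0.3) > 0
assert discretized_covariance(0.8, 0.9) < 0
```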
In later work Buzacott [4, 5, 6, 7] has considered a wide variety of production line models. These papers consider random service times as well as fixed cycle service times. In particular, Buzacott's "Automatic Transfer Lines With Buffer Stocks" [4] deals directly with the question of positioning of buffer stocks and related issues. A key assumption made by Buzacott [4] is that "If the line has no buffers it will be forced down as soon as any station breaks down." This is the assumption of the serial production line as contrasted with a sequential relay production line.
In [4] Buzacott examines the case where, in addition to a failure rate p_i for the ith machine, the repair time for the ith machine is random with mean R_i. For this model Buzacott has developed an approximating technique for estimating the mean throughput of the production line, and has asserted that the throughput is maximized when the buffer is placed so that the line is separated into two stages of equal reliability, e.g., the middle of the line if all machines have the same failure rate. Assuming that the probability of failure of two or more machines in the same cycle is negligible, Buzacott approximates the unbuffered reliability, A_0, for an N machine line with
A_0 = 1/(1 + Σ_i p_iR_i).

For the case when p_i = p and R_i = 1, this approximation is simply

A_0 = 1/(1 + Np).

The actual reliability for this case would be q^N, where q = 1 − p.
Buzacott subsequently defines A, the reliability of a two-stage (one buffer with capacity M) production line, to be

A = A_0 + P_1h(M),

where P_1 is the proportion of time that the first stage is under repair, and h(M) is the proportion of time the first stage is under repair that the line is up. According to the approximating procedure, if the first machine is under repair the second machine is necessarily up. If this simplifying assumption is dropped, then the proportion of time the first stage is under repair that the second stage is up would be q·h(M). For the case p_1 = p_2 = p, R_1 = R_2 = 1, Buzacott shows that h(M) = M/(M + 1), so that

A = A_0 + p · M/(M + 1),

where P_1 = p. Now observe that if q·h(M) is used in place of h(M), then

A = A_0 + pq · M/(M + 1),    i.e.,    A = A_0 + (q − q²) M/(M + 1).

If the unbuffered reliability is q², then this result concurs with equation (6).
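A quick numerical comparison of Buzacott's approximation with the exact unbuffered reliability, and of the modified two-stage estimate with equation (6), can be sketched as follows (ours; the chosen p, q, M values are illustrative):

```python
def buzacott_a0(p, N):
    # Buzacott's approximation to the unbuffered reliability of an
    # N-machine line with common failure rate p and unit repair times
    return 1.0 / (1.0 + N * p)

# the approximation tracks the exact value q**N, and the error
# shrinks as the failure rate p decreases
N = 5
errors = [abs(buzacott_a0(p, N) - (1 - p)**N) for p in (0.05, 0.01, 0.001)]
assert errors[0] > errors[1] > errors[2]

# with q*h(M) in place of h(M) and unbuffered reliability q**2, the
# two-stage estimate agrees exactly with equation (6) for N = 2
q, M = 0.9, 3
assert abs((q**2 + (q - q**2) * M / (M + 1)) - q * (M + q) / (M + 1)) < 1e-12
```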
6. COST/RELIABILITY TRADEOFF FOR THE SERIAL MODEL
In this section we shall derive a sufficient condition for the existence of a buffer in the serial model when there is a possible trade-off between increasing q and creating a buffer. We shall assume that all machines have equal probability of functioning correctly. Let g(M) be the cost of creating a buffer of size M, where M is a continuous variable, and let h(q) be the cost of maintaining a performance level of q for all machines. We shall consider the following maximization problem (MP):

(MP)    max f(M, q) = q^N + (q^{N/2} − q^N) M/(M + 1)
        subject to:  g(M) + h(q) ≤ C,
                     q ≤ 1,
                     M, q ≥ 0,
where C represents the funds available for maintaining the production line. We shall assume that both g and h are strictly increasing functions, and that h(q) > C when q is near 1. The latter assumption is necessary for (MP) to be meaningful, for if h(1) ≤ C, then one obviously could employ as an optimal solution M = 0, q = 1. Some reasonable h functions could be

(i)    h_1(q) = b_1q^{r_1},        b_1 > 0, r_1 > 0,
(ii)   h_2(q) = b_2/(1 − q)^{r_2},  b_2 > 0, r_2 > 0,

where in (i) b_1 > C so that h_1(1) > C. Furthermore, note that h_2(q) → ∞ as q → 1. We assume that g(0) + h(0) < C; hence Slater's Constraint Qualification is satisfied [15].
Now suppose that (0, q*) is an optimal solution to (MP), i.e., M = 0. Since f(M, q) is a strictly increasing function of q for any fixed M ≥ 0, it follows that

h(q*) = C,    or    q* = h^{−1}(C).

For (0, q*) to be optimal for (MP), (0, q*) must satisfy the Kuhn-Tucker necessary conditions [14]; that is, (λ_1*, λ_2*) ≥ 0 must exist such that

(15)    ∂f/∂M(0, q*) − λ_1*g′(0) ≤ 0,

(16)    ∂f/∂q(0, q*) − λ_1*h′(q*) − λ_2* ≤ 0.

Observe that, based upon our assumptions concerning h, it must follow that q* < 1, and this in turn implies that λ_2* = 0. Furthermore, note that (16) will be satisfied as a strict equality since q* > 0. Hence from (16) one would obtain
(17)    λ_1* = [∂f/∂q(0, q*)] / h′(q*),

and substituting (17) into (15) yields

(18)    ∂f/∂M(0, q*) − [∂f/∂q(0, q*)/h′(q*)] g′(0) ≤ 0.

After inserting the appropriate quantities into (18) one obtains

q*^{N/2} − q*^N ≤ g′(0) N q*^{N−1}/h′(q*),

which after cancelling a q*^{N/2} and rearranging yields

(19)    g′(0) N q*^{N/2−1}/h′(q*) + q*^{N/2} ≥ 1.
THEOREM 3: A sufficient condition for the existence of a buffer (immediately after machine N/2) is that

g′(0) N q*^{N/2−1}/h′(q*) + q*^{N/2} < 1,

where q* is the maximum attainable reliability with no buffer, i.e., h(q*) = C.
PROOF: If the hypothesis of the theorem is satisfied, then (0, q*) could not be optimal, since (19) is violated. Hence the optimal solution must be of the form (M, q) where M > 0.    Q.E.D.

The optimal solution to (MP) can be obtained in a straightforward manner that is especially well suited to the case when M must be integer. At optimality the constraint g(M) + h(q) ≤ C is obviously satisfied as an equality, so that q can be expressed as an explicit function of M, i.e., q = h^{−1}[C − g(M)]. Hence the objective function can be formulated as a function of M alone, and the optimal solution can be obtained by maximizing over the set of feasible values for M.
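The one-dimensional search just described can be sketched as follows. The cost functions h(q) = b/(1 − q) (the form h_2 with r_2 = 1) and g(M) = cM, and all numeric constants, are hypothetical choices of ours made only to illustrate the procedure.

```python
def solve_mp(N, C, b, c, step=0.001):
    """Grid search for (MP) with illustrative costs h(q) = b/(1-q) and
    g(M) = c*M, so that q = h^{-1}(C - g(M)) = 1 - b/(C - c*M)."""
    def f(M, q):
        return q**N + (q**(N / 2) - q**N) * M / (M + 1)
    best_M, best_val = 0.0, -1.0
    M, M_max = 0.0, (C - b) / c          # feasibility: g(M) <= C - h(0)
    while M <= M_max:
        q = 1 - b / (C - c * M)
        if 0 <= q < 1:
            val = f(M, q)
            if val > best_val:
                best_M, best_val = M, val
        M += step
    return best_M, best_val

N, C, b, c = 10, 1.0, 0.1, 0.2
best_M, best_val = solve_mp(N, C, b, c)
no_buffer_val = (1 - b / C)**N           # q* = h^{-1}(C) = 0.9, f = q*^N
assert best_val > no_buffer_val          # a positive buffer pays off here
assert best_M > 0
```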
In summary, we analyze the relationship (19) more closely. Note that (19) would be violated if
(i) the marginal cost of buffer capacity at zero is small, i.e., g′(0) near zero;
(ii) the marginal cost of increased reliability at q* is high, i.e., h′(q*) very large;
(iii) the number of machines in the line is large.
Note that (iii) holds since

g′(0) N q*^{N/2−1}/h′(q*) + q*^{N/2} → 0 < 1 as N → ∞, because Nq*^{N/2} → 0 when q* < 1; see [15].
ACKNOWLEDGMENT
The authors wish to thank Professor Matt Rosenshine of the Industrial Engineering Departmen
the Pennsylvania State University, for several enlightening discussions concerning this paper. I
particular, Professor Rosenshine brought to our attention the possible role of the covariance functioi
REFERENCES
[1] Bartle, R. G., The Elements of Real Analysis (John Wiley, New York, 1964).
[2] Burke, Paul J., "The Output of a Queuing System," Operations Research 4, No. 6 (1956).
[3] Buxey, G. M., N. D. Slack and Ray Wild, "Production Flow Line Systems Design — A Review," AIIE Transactions (March 1973).
[4] Buzacott, J. A., "Automatic Transfer Lines with Buffer Stocks," International J. Production Res. 5, No. 3 (1967).
[5] Buzacott, J. A., "The Effect of Station Breakdowns and Random Processing Times on the Capacity of Flow Lines with In-Process Storage," AIIE Transactions 4 (1972).
[6] Buzacott, J. A., "The Role of Inventory Banks in Flow-Line Production Systems," International J. Production Res. 9, No. 4 (1971).
[7] Buzacott, J. A., "Prediction of the Efficiency of Production Systems Without Internal Storage," International J. Production Res. 6, No. 3 (1968).
[8] Clarke, A. Bruce and Ralph L. Disney, Probability and Random Processes for Engineers and Scientists (John Wiley and Sons, New York, 1970).
[9] Courant, R., Differential and Integral Calculus, Vol. 1 (Interscience-John Wiley, New York, 1964).
[10] Freeman, Michael C., "The Effects of Breakdowns and Interstage Storage on Production Line Capacity," J. Indust. Engin. 15, No. 4 (1964).
[11] Hillier, Frederick S. and Ronald W. Boling, "Finite Queues in Series with Exponential or Erlang Service Times — A Numerical Approach," Oper. Res. 15, No. 2 (1967).
[12] Hunt, Gordon C., "Sequential Arrays of Waiting Lines," Oper. Res. 4, No. 6 (1956).
[13] Koenigsberg, E., "Production Lines and Internal Storage — A Review," Man. Sci. 5, No. 4 (1959).
[14] Kuhn, H. W. and A. W. Tucker, "Nonlinear Programming," in Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability, J. Neyman (ed.) (University of California Press, Berkeley, Calif., 1951).
[15] Mangasarian, O. L., Nonlinear Programming (McGraw-Hill, New York, 1969).
DETERMINING ADJACENT VERTICES ON ASSIGNMENT
POLYTOPES
Patrick G. McKeown
State University of New York at Albany
School of Business
Albany, New York
ABSTRACT
To rank the solutions to the assignment problem using an extreme point method, it is
necessary to be able to find all extreme points which are adjacent to a given extreme solu-
tion. Recent work has shown a procedure for determining adjacent vertices on transportation
polytopes using a modification of the Chernikova Algorithm. We present here a procedure
for assignment polytopes which is a simplification of the more general procedure for trans-
portation polytopes and which also allows for implicit enumeration of adjacent vertices.
Introduction
In a recent paper, McKeown and Rubin [4] demonstrated that a modified and simplified version of the Chernikova Algorithm [3] could be used to generate adjacent vertices on transportation polytopes. This modification for transportation problems uses the special structure of the constraint set to reduce the amount of checking required before two columns can be combined to form an edge leading to a new vertex.
However, this procedure precludes dropping edges for purely cost considerations; i.e., even if an edge could be shown not to satisfy some upper bound, it must be retained for checking later edges. This would become very unwieldy if one wished to rank the vertices of the assignment problem in nondecreasing order according to objective value. In such cases, there will be

Σ_{i=0}^{n−2} C(n, i)(n − i − 1)!

edges generated [1]. Such a ranking procedure would be used in two situations. First, one might wish to determine the best solution other than the optimal that would satisfy some sort of secondary constraints, possibly behavioral. Secondly, it is well known that various problems with assignment constraints can be solved by ranking the vertices of a related assignment problem in a manner similar to that presented by Murty [6] for the linear fixed charge problem. Examples of such problems are the quadratic assignment problem [2], the traveling salesman problem [8], and the bottleneck assignment problem.
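The count above can be checked for small n by brute force, using the classical characterization that two vertices of the assignment polytope are adjacent exactly when the corresponding permutations differ by a single cycle. The sketch below is ours and purely illustrative.

```python
from itertools import permutations
from math import comb, factorial

def adjacent_count(n):
    """Count permutations adjacent to the identity on the assignment
    polytope: those whose non-fixed points form a single cycle."""
    count = 0
    for perm in permutations(range(n)):
        moved = [i for i in range(n) if perm[i] != i]
        if not moved:
            continue                      # skip the identity itself
        start = moved[0]                  # walk the cycle through start
        j, cycle = perm[start], {start}
        while j != start:
            cycle.add(j)
            j = perm[j]
        if cycle == set(moved):           # exactly one nontrivial cycle
            count += 1
    return count

def formula(n):
    return sum(comb(n, i) * factorial(n - i - 1) for i in range(n - 1))

for n in (3, 4, 5):
    assert adjacent_count(n) == formula(n)
```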
We will present here a theorem which shows that, for the assignment polytope, we may avoid keeping any columns of the Chernikova tableau which are unwanted from a cost standpoint. In so doing, we are able to implicitly enumerate the adjacent vertices rather than actually finding all of them. This paper will present a theoretical framework for such a procedure; a later paper will present the procedure and computational results for various applications [5]. Before doing so, we will present a discussion and paraphrase of the Chernikova algorithm for transportation polytopes. For a more detailed description, see [4].
II. THE CHERNIKOVA ALGORITHM FOR TRANSPORTATION POLYTOPES
The assignment problem may be formulated as follows:

        Min Σ_i Σ_j c_{ij}x_{ij}
(1)     s.t.  Σ_i x_{ij} = 1,    j = 1, . . ., n,
(2)           Σ_j x_{ij} = 1,    i = 1, . . ., n,        (P)
(3)           x_{ij} ≥ 0.
Since P is a special case of the transportation problem, the solutions will be all integer. In addition, it is important to note that P is degenerate to degree n − 1. This degeneracy has caused difficulties in finding vertices adjacent to any vertex of the polyhedron defined by the constraint set of P.
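For small n, problem (P) can be solved by enumerating its vertices, the n! permutation matrices — a useful baseline when experimenting with ranking procedures. The sketch and cost matrix below are ours and purely illustrative.

```python
from itertools import permutations

def assignment_brute_force(cost):
    """Solve (P) for a small cost matrix by enumerating all n!
    vertices of the assignment polytope (permutation matrices)."""
    n = len(cost)
    best = min(permutations(range(n)),
               key=lambda perm: sum(cost[i][perm[i]] for i in range(n)))
    return best, sum(cost[i][best[i]] for i in range(n))

cost = [[4, 1, 3],
        [2, 0, 5],
        [3, 2, 2]]
perm, value = assignment_brute_force(cost)
assert value == 5 and perm == (1, 0, 2)
```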
The Chernikova algorithm finds adjacent vertices by determining edges that connect the present vertex to the adjacent vertices. It should be remembered that this is geometric adjacency as compared to the algebraic adjacency associated with a simplex pivot. For a more detailed discussion of these concepts, see [4]. In either case, the edge leading to an adjacent extreme solution may be represented in the context of the transportation or assignment tableau by a loop, i.e., a nondecomposable simple connected path that connects one or more nonbasic cells to a subset of the basic cells. The edge directions are found by starting with a nonbasic cell, assigning a −1, and proceeding around the loop alternately assigning +1, −1, etc. If a +1 falls in a degenerate basic cell, then this edge leads to an algebraically adjacent vertex but not to a geometrically adjacent vertex, i.e., we find another basic representation of the present vertex. If no +1's fall in degenerate basic cells, then we have a geometrically adjacent vertex. It is these latter edges that the Chernikova algorithm generates. We will now describe the algorithm for the more general case of the transportation problem.
Recall that in Tucker's tableau for some solution (X_B, X_N),

X_B = B^{−1}b + B^{−1}N(−X_N) ≥ 0

and

X_N = 0 + (−I)(−X_N) ≥ 0.

Then we may partition X_B into degenerate and nondegenerate variables as follows:

X_B = (X_Q, X_D),

where

X_Q = q + Q(−X_N) ≥ 0,
X_D = 0 + D(−X_N) ≥ 0,
X_N = 0 + (−I)(−X_N) ≥ 0, and q > 0.
The Chernikova algorithm finds vertices adjacent to the present solution by determining the edges of
the cone C = {X_N | −DX_N ≥ 0, X_N ≥ 0}. To do this, first consider the matrix [−D; I], i.e., −D stacked
over the identity. Chernikova's algorithm is a series of transformations of this matrix which generate
all edges corresponding to adjacent vertices. At any stage of the process, we denote the old matrix by
Y = [U; L], and the new matrix being generated is denoted Ȳ. For transportation polytopes, all elements
of Y are always +1, −1, or 0 [4]. In such a case, the simplex tableau less the nondegenerate rows can
serve as the initial Y matrix for Chernikova's algorithm. If the final tableau in the algorithm is [U; L],
then the rows corresponding to the nondegenerate basic variables have entries from −Q. For the
transportation tableau, we refer to columns of the Chernikova tableau by two systems: a single subscript
k indicates that we refer to the kth column of the tableau; a double subscript refers to the cell in the
assignment tableau associated with this column. For initial columns k or ij (before Chernikova's
procedure is begun) let R(k) = {i}, C(k) = {j}, and I(k) = {ij}. Note that at this point, I(k) contains
the subscripts of the (n−1)² nonbasic variables. We now replace the matrix L with the index sets
R, C, and I. Using these modifications, we may state the following:
ALGORITHM
(0.1) If all elements of U are nonpositive, then the algorithm terminates with the edges contained
in Y.
(1) Choose the first row of U, say row r, with at least one positive element.
(2) Let R̄ = {j | y_rj ≤ 0}. Let v = |R̄|. Then the first v columns of the new matrix, Ȳ, are all the y_j for
j ∈ R̄, where y_j denotes the jth column of Y.
(3) Let S = {(s,t) | y_rs·y_rt < 0, s < t}, and let I₀ be the index set of all nonpositive rows of U. For each
(s,t) ∈ S, let I₁(s,t) = {i ∈ I₀ | y_is = y_it = 0}. We now use some of the elements of S to create additional
columns of Ȳ.
(a) If R(s) ∩ R(t) ≠ ∅ or C(s) ∩ C(t) ≠ ∅, don't combine columns s and t. Get the next element
of S, and go to (a).
(b) For each column v ≠ s,t, proceed as follows:
(i) If I(v) ⊄ I(s) ∪ I(t), go to the next v.
(ii) If I(v) ⊆ I(s) ∪ I(t), check to see if y_iv = 0 for all i ∈ I₁(s,t) or if I₁(s,t) = ∅. If so, don't
combine s and t; get the next element of S, and go to (a).
458 P. G. McKEOWN

(iii) If all v ≠ s,t have been checked and none prevented the combination of s and t, then
combine columns s and t to form a new column of Ȳ. This combination is made as
follows:

        y_(s+t) = y_s + y_t,
        R(s+t) = R(s) ∪ R(t),
        C(s+t) = C(s) ∪ C(t),
        I(s+t) = I(s) ∪ I(t).

Get the next element of S, and go to (a).
(4) When all pairs in S have been examined, and the additional columns (if any) have been added,
we say that row r has been processed. Now let Y denote the matrix Ȳ produced in processing
row r, and return to step (0.1).
Costs for adjacent vertices may be computed by adding a dual variable row to the Y matrix. The
set of checks in step (3)(b) above precludes dropping columns that can be shown not to satisfy some
present cost requirement. This could happen if we did not want to generate edges which correspond
to assignments with value worse than some upper bound. The next section will show that for such cases
we need not keep these unwanted edges.
III. MODIFICATIONS OF THE CHERNIKOVA ALGORITHM
We may note from the above discussion of the Chernikova algorithm that there is a considerable
amount of checking necessary to generate all adjacent vertices. It is also evident that columns may
not be deleted due to cost considerations since they must be kept for the checking procedure. For the
assignment polytope, the following theorem will show that we need not do the (3)(b) checking, i.e.,
only the columns s and t need be checked before they are combined.
THEOREM: For the assignment problem, consider the diagonal assignment (i.e., x_ii = 1, i = 1, . . .,
n) with the n−1 degenerate assignments all chosen to be in the first row of the assignment tableau.
If for two columns of the Chernikova tableau, say s and t, we have y_rs·y_rt < 0, R(s) ∩ R(t) = ∅, and
C(s) ∩ C(t) = ∅, then columns s and t may be combined. We note that since we can renumber the
variables, any result about the diagonal assignment is valid for all assignments.
Before proving this theorem, we will state two lemmas which are essential to its proof.
First recall in the following lemma that the U matrix contains the degenerate elements of the
Assignment Tableau. We will then adopt the notation that U_k refers to the kth column of U, and u_ik
is the (i,k) element of that matrix. We will also use |U_k| to mean the number of nonzero elements in U_k.
LEMMA 1: Under the conditions given in the theorem, for some columns of the Chernikova tableau
we have
(i) |U_s| = 0, 1, or 2; moreover, for some rows i and j we have
(ii) u_is = −1, if |U_s| = 1, and
(iii) u_is·u_js = −1, if |U_s| = 2.
PROOF: From the notion of a loop, we know that no row or column of the tableau can have more
than two elements in a loop. Moreover, if there are two elements in the same row or column, they must
have opposite edge directions. Since U_s contains the edge directions in the degenerate cells for some
loop, (i) and (iii) follow directly. For (ii), assume that |U_s| = 1 and, for some i, u_is = 1. This implies that the
ith degenerate cell is connected to cell (1,1) with edge sign −1 and that cell (1,1) is connected to a
nonbasic cell (k,1) with edge sign +1. But this is a contradiction of the loop rules, and u_is must have
edge sign −1. Q.E.D.
LEMMA 2: Under the conditions given in the theorem, for any column s of the Chernikova tableau,
(i) u_js = +1 ⟹ (i, j+1) ∈ I(s) for some i, and
(ii) u_is = −1 ⟹ (i+1, j) ∈ I(s) for some j.
PROOF (i): We know that u_js = +1 implies that for some nonbasic cell s, the loop joining s to
a subset of the basic cells has an edge direction of +1 in the jth degenerate cell. Note also that the jth
degenerate cell is also the (1, j+1)st cell in the first row. By the concept of a loop, the cell (1, j+1) must
be connected to some cell (i, j+1) with edge sign −1. Let us assume that (i, j+1) is basic. Since we are
dealing with the diagonal assignment, i = j+1, and there are no other nondegenerate basic cells in
this row. If there is a nonbasic cell in this row, it must have edge sign −1 and the basic cell (j+1, j+1)
must have edge sign +1. But this contradicts the above assumption. Hence the cell (i, j+1) must be
nonbasic and a member of I(s).
The proof for the second case is similar. Q.E.D.
PROOF OF THEOREM: If s and t are the columns to be combined, where R(s) ∩ R(t) = ∅ and
C(s) ∩ C(t) = ∅, then by Lemma 1 there are three possible cases to examine. These are:
(i) u_rs = −1 and ∃ j such that u_js = +1;
    u_rt = +1 and ∃ i such that u_it = −1 for i ≠ j.
(ii) same as (i) except i = j.
(iii) u_rs = −1;
    u_rt = +1 and ∃ i such that u_it = −1.
We will show that if s and t are edges, then so is (s+t). Note that if r = 1, we can always combine
s and t. Hence, we may assume that, for r ≠ 1, s and t are edges, and attempt to prove that s+t is also
an edge. We will now proceed to prove case (i) above.
CASE (i): If u_rs = −1 and u_js = +1, then there exists for j = j₁ a loop joining the elements (1, r+1),
(1, j₁+1), a subset of the elements of the diagonal assignment, and the elements of I, i.e.,

    {(1, r+1)(−), (r+1, r+1)(+), (r+1, i₁)(−), . . ., (i₁, i₁)(+), (i₁, j₁+1)(−), (1, j₁+1)(+)}

with edge signs as shown. In particular, by Lemma 2 we know that (i₁, j₁+1) and (r+1, i₁) are in this
loop. Now if u_rt = +1 and u_it = −1, then in a similar fashion we will have the following loop elements
and edge signs for i = i_k:

    {(1, r+1)(+), (j₁, r+1)(−), (j₁, j₁)(+), . . ., (j_k, j_k)(+), (i_k+1, j_k)(−), (i_k+1, i_k+1)(+), (1, i_k+1)(−)}.

If we now compute the sum of s and t, we have:

    {(1, j₁+1)(+), (i₁, j₁+1)(−), (i₁, i₁)(+), . . ., (r+1, i₁)(−), (r+1, r+1)(+), (j₁, r+1)(−),
     (j₁, j₁)(+), . . ., (j_k, j_k)(+), (i_k+1, j_k)(−), (i_k+1, i_k+1)(+), (1, i_k+1)(−)}.

This is a nondecomposable loop representing the sum of s and t; hence it corresponds to an edge.
Since a degenerate element, (1, j₁+1), has positive edge sign, this column corresponds to an alternate
basic representation of the diagonal assignment, i.e., an edge of zero length.
Cases (ii) and (iii) follow in a similar manner except that they correspond to vertices other than the
diagonal assignment. It is only necessary to show that these new vertices are adjacent to the diagonal
assignment. This may be shown using the result proved by Murty [7] that all assignments adjacent to
the diagonal assignment are either traveling salesman tours or subtours with selftours. Thus in cases
(ii) and (iii) we need only show that the resulting assignments correspond to one of these two possibilities.
This may be done by looking at the result in case (iii) of combining s and t, i.e.,

    {(1, 1)(+), (i_l, 1)(−), . . ., (r+1, i₁)(−), (r+1, r+1)(+), (j₁, r+1)(−), (j₁, j₁)(+), . . .,
     (j_k, j_k)(+), (i_k+1, j_k)(−), (i_k+1, i_k+1)(+), (1, i_k+1)(−)}.

In this case, the new assignment will contain those cells with an edge sign of (−) and any
diagonal elements which are not contained in this loop, i.e., {(i_l, 1), (i_{l−1}, i_l), . . ., (r+1, i₁),
(j₁, r+1), . . ., (i_k+1, j_k), (1, i_k+1)}, which is a tour (or subtour with selftours). Hence (s+t)
corresponds to an adjacent assignment. The reasoning for case (ii) is the same. Q.E.D.
From an intuitive standpoint, what we have shown is that if s and t satisfy the conditions of the
theorem, then the loops corresponding to the columns s and t may always be combined to form a new
nondecomposable loop.
A numerical example of this revised algorithm may be found by referring to [4]. The example
there is an assignment problem that has diagonal assignments but has degenerate cells not all in the
first row. If one were to rearrange the degenerate cells to satisfy the theorem, that example may easily
be reworked without the zero checks present in that algorithm.
REFERENCES
[1] Balinski, M. L. and Andrew Russakoff, "On the Assignment Polytope," SIAM Review 16, 516-525
    (1974).
[2] Cabot, A. V. and R. L. Francis, "Solving Certain Nonconvex Quadratic Minimization Problems
    by Ranking the Extreme Points," Oper. Res. 18, 82-86 (1970).
[3] Chernikova, N. V., "Algorithm for Finding a General Formula for the Nonnegative Solutions of a
    System of Linear Inequalities," U.S.S.R. Computational Mathematics and Mathematical Physics
    5, 228-233 (1965).
[4] McKeown, P. G. and D. S. Rubin, "Adjacent Vertices on Transportation Polytopes," Nav. Res.
    Log. Quart. 22, 365-374 (1975).
[5] McKeown, P. G. and Donald Bishko, "Using the Chernikova Algorithm to Rank Assignments,"
    in preparation.
[6] Murty, K. G., "Solving the Fixed Charge Problem by Ranking the Extreme Points," Oper. Res.
    16, 268-279 (1968).
[7] Murty, K. G., "On the Tours of a Traveling Salesman," SIAM Journal on Control 7, 122-131 (1969).
[8] Murty, K. G., Caroline Karel and John D. C. Little, "The Traveling Salesman Problem: Solution
    by a Method of Ranking Assignments," Unpublished, Case Institute of Technology (196 ).
IMPROVED CONVERGENCE RATE RESULTS FOR A
CLASS OF EXPONENTIAL PENALTY FUNCTIONS
Frederic H. Murphy
Federal Energy Administration
Washington, D.C.
ABSTRACT
An improved theoretical rate of convergence is shown for a member of the class of
exponential penalty function algorithms. We show that the algorithm has a superlinear
convergence rate.
INTRODUCTION
An exponential penalty function algorithm is presented in [9] for the following nonlinear
programming problem NLP:

(1)     minimize f(x)
        x ∈ Eⁿ

(2)     subject to g_i(x) ≤ 0 for i = 1, . . ., m

with an optimal solution x*. The penalty function is

(3)     F_k(x) = f(x) + Σ_{i=1}^m (1/r_k) exp[r_k g_i(x)]

where r_{k+1} ≥ r_k ≥ 1 and r_k → ∞. This class of penalty functions is contained in the class of penalty
functions discussed in Evans and Gould [3]. And O'Neill and Widhelm [12] successfully used a variant
of this penalty function as the Lagrangian of a modified version of Dantzig-Wolfe generalized
programming.
METHOD
A variant of (3) is considered where we can analyze its asymptotic convergence properties. It is
shown that in the limit it has the convergence properties of method of multiplier algorithms [13].
This version of (3) also meets one of the important concerns with method of multiplier type
algorithms; that is, it is everywhere second order differentiable if f(·) and g_1(·), . . ., g_m(·) are. Our
algorithm is:
(Step 1) Minimize

(4)     f(x) + Σ_{i=1}^m (λ_i^k / r_k) exp[r_k g_i(x)]
with the minimum at x_k.
(Step 2) Set

(5)     γ_i^k = λ_i^k exp[r_k g_i(x_k)],

then let

(6)     λ_i^{k+1} = min[U_k, max[L_{k+1}, γ_i^k]]

where U_k > L_{k+1} > 0, U_k → ∞, L_{k+1} → 0, r_k L_{k+1} → ∞, and U_k/r_k → 0 as k → ∞.
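To see Steps 1 and 2 in action, the algorithm can be run on a small convex program. The instance below (minimize x² subject to 1 − x ≤ 0, with solution x* = 1 and multiplier λ* = 2) and the parameter schedules U_k = 10k, L_{k+1} = 1/k, r_{k+1} = 1.5 r_k are invented for illustration; they merely satisfy the conditions in (6).

```python
import math

def cexp(z):
    # exp with a capped argument so large r_k cannot overflow a float
    return math.exp(min(z, 700.0))

def step1(lam, r):
    # Step 1: minimize x**2 + (lam / r) * exp[r * (1 - x)] by bisection on
    # the stationarity condition 2x - lam * exp[r * (1 - x)] = 0.
    lo, hi = 0.0, 10.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if 2.0 * mid - lam * cexp(r * (1.0 - mid)) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

lam, r = 1.0, 1.0
for k in range(1, 30):
    xk = step1(lam, r)
    gamma = lam * cexp(r * (1.0 - xk))   # Step 2, equation (5)
    Uk, Lk1 = 10.0 * k, 1.0 / k          # U_k -> infinity, L_{k+1} -> 0
    lam = min(Uk, max(Lk1, gamma))       # equation (6)
    r *= 1.5                             # r_k -> infinity, U_k / r_k -> 0

print(round(xk, 3), round(lam, 3))       # approaches x* = 1, lambda* = 2
```

The multiplier error shrinks roughly by a factor of r_k per iteration, which is the superlinear behavior claimed in the abstract.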
CONVERGENCE CONSIDERATIONS
Since this penalty function is a member of the class of exponential penalty functions as described
by Evans and Gould [3], we need not prove convergence here. To establish our convergence rate result,
we make the following assumptions:
1. NLP is a convex programming problem.
2. Slater's constraint qualification [7] is satisfied.
3. Either f(x) or one of the binding constraints has a positive definite Hessian at the optimal
solution.
4. The feasible region is compact.
5. Strict complementarity holds; that is, g_i(x*) = 0 implies λ_i* > 0, where λ_i* is the optimal Lagrange
multiplier for constraint i, i = 1, . . ., m.
6. λ_1*, . . ., λ_m* are unique; x* is a unique optimal solution.
7. {∇g_i(x*) | g_i(x*) = 0, i = 1, . . ., m} are linearly independent.
Assumption 4 and continuity as implied by assumption 1 guarantee convergence of the algorithm. In
[10] it was shown that under assumptions 1 and 2, γ_1^k, . . ., γ_m^k are uniformly bounded. As this is a
standard proof (see [4]), it is not repeated here. In constructing a dual function for a modified version
of NLP we need assumptions 3 and 7 to be able to invert a matrix in the dual function.
LEMMA 1: Let I ⊂ {1, . . ., m} be the index set of the binding constraints at the optimal solution.
With assumptions 1, 2, 5, and 6,

(7)     lim_{k→∞} r_k g_i(x_k) = 0 for i ∈ I.

PROOF: For i ∈ I, there is a K such that for k ≥ K

(8)     λ_i^{k+1} = min[U_k, max[L_{k+1}, γ_i^k]] = γ_i^k.

Since γ_i^k → λ_i* as k → ∞ by the uniqueness of λ_i*, λ_i^k → λ_i* as k → ∞. That is,

(9)     λ_i^{k+1} − λ_i^k = λ_i^k exp[r_k g_i(x_k)] − λ_i^k
                        = λ_i^k [exp[r_k g_i(x_k)] − 1].

Since λ_i^{k+1} − λ_i^k → 0 as k → ∞ and λ_i* > 0,

(10)    exp[r_k g_i(x_k)] → 1 as k → ∞

and r_k g_i(x_k) → 0 as k → ∞. □
Unlike the algorithms that fit into the classification of methods of multipliers [5], [8], [11], [13],
where no trial Lagrange multiplier is ever zero, with this algorithm we bear the computational burden
of carrying along nonbinding constraints within the unconstrained minimization. To deal with this
problem we have to analyze the behavior of the nonbinding constraints.
LEMMA 2: There exists a K such that for k ≥ K and i ∉ I, λ_i^k = L_k.
PROOF: For i ∉ I, since g_i(x*) < 0 there exists an ε > 0 and a K such that for k ≥ K

(11)    g_i(x_{k−1}) ≤ −ε.

That is,

(12)    exp[r_{k−1} g_i(x_{k−1})] ≤ exp[−r_{k−1}ε].

Consequently,

(13)    γ_i^{k−1} = λ_i^{k−1} exp[r_{k−1} g_i(x_{k−1})] ≤ U_{k−1} exp[−r_{k−1}ε].

Since U_{k−1}/r_{k−1} → 0 and r_{k−1} exp[−r_{k−1}ε] → 0, there exists a K′ such that the right hand side of (13)
is less than L_k because L_k r_{k−1} → ∞ as k → ∞. Choose k ≥ max(K, K′) and the lemma holds. □
Without loss of generality, let I = {1, . . ., p}. We may then formulate a set of new nonlinear
programs NLP1:

(14)    minimize_{x∈Eⁿ} f̄(r_k, x) = f(x) + Σ_{i=p+1}^m (L_k/r_k) exp[r_k g_i(x)]

(15)    subject to g_i(x) = 0 for i = 1, . . ., p.

Now construct a penalty function for each nonlinear program in NLP1. We use the same penalty
function as before with the r_k used in the penalty function the same as that used in the objective function
(14). Letting x_k be the optimal solution to this penalty function we see that x_k → x* as k → ∞;
for k large enough, by Lemma 2, λ_i^k = L_k; and for r_k large enough (14) is arbitrarily close to (1) over the
feasible region defined by (2). We can now apply the duality results in Luenberger [6, p. 321]. We
define the dual function φ of an element of NLP1 for a given value of r_k as:
(16)    φ(λ) = minimum_{x∈Eⁿ} { f(x) + Σ_{i=p+1}^m [L_k/r_k] exp[r_k g_i(x)]
                  + Σ_{i=1}^p [λ_i/r_k] [exp[r_k g_i(x)] − 1] }
             = minimum_{x∈Eⁿ} { f̄(r_k, x) + Σ_{i=1}^p [λ_i/r_k] [exp[r_k g_i(x)] − 1] }

where λ = (λ_1, . . ., λ_p).
The function φ is convex since we assumed f(x), g_1(x), . . ., g_m(x) are convex.
Now by [6, p. 321], with k ≥ K as defined in Lemma 2,

(17)    ∇φ(λ^k) = [ (1/r_k)[exp[r_k g_1(x_k)] − 1], . . ., (1/r_k)[exp[r_k g_p(x_k)] − 1] ]′.

(18)    Let ∇g(x) = [∇g_1(x), . . ., ∇g_p(x)]′, the matrix whose rows are the gradients of the binding constraints,

(19)    let Λ(λ) = diag(λ_1, . . ., λ_p),

(20)    and let H(r_k, x) = diag(exp[r_k g_1(x)], . . ., exp[r_k g_p(x)]).

Let F̄(r_k, x) be the Hessian of f̄(r_k, x) and let G_i(x) be the Hessian of g_i(x) for i = 1, . . ., p.
From this we can construct the Hessian of φ(λ^k):

(21)    ∇²φ(λ^k) = H(r_k, x_k)∇g(x_k) [ F̄(r_k, x_k) + Σ_{i=1}^p λ_i^k exp[r_k g_i(x_k)]G_i(x_k)
                     + r_k ∇g(x_k)′Λ(λ^k)H(r_k, x_k)∇g(x_k) ]⁻¹ ∇g(x_k)′H(r_k, x_k).
The inverse of the matrix in the brackets exists since this matrix is positive definite. This comes
about since in assumption 3 we assumed positive definiteness of the Hessian of one of the functions
f(x), g_1(x), . . ., g_p(x) and all other terms in this matrix are positive semidefinite. Now (21) can be
rewritten as

(22)    M_k = H(r_k, x_k)∇g(x_k) [ F̄(r_k, x_k) + Σ_{i=1}^p λ_i^k exp[r_k g_i(x_k)]G_i(x_k)
                + r_k ∇g(x_k)′Λ(λ^{k+1})∇g(x_k) ]⁻¹ ∇g(x_k)′H(r_k, x_k).
To analyze (22) we now prove a simple extension of a lemma in Mangasarian [8].
LEMMA 3: Let C(a) = B(A(a) + B′Q(a)B)⁻¹B′, where B is a given m×n matrix of rank m,
A(a) is an n×n matrix function on R, Q(a) is a differentiable m×m matrix function on R, A(a) +
B′Q(a)B is positive definite for a ≥ ā for some ā, and for every ε > 0 there is an a_ε such that for
a ≥ a_ε, |A(a) − A| < ε, where A + B′Q(a)B is positive definite for a ≥ ā. Then there exists a constant
matrix D such that for every δ > 0 there is an a_δ with

(23)    |C(a) − (Q(a) + D)⁻¹| < δ

for a ≥ a_δ.
PROOF: Define C₁(a) = B(A + B′Q(a)B)⁻¹B′. By the continuity of the inverse of a matrix,
for every δ there exists an ε_δ such that

(24)    |C(a) − C₁(a)| < δ

if |A(a) − A| < ε_δ. This means there exists an a_δ such that (24) holds for a ≥ a_δ.
The formula for differentiating the inverse of a matrix is

(25)    d(C₁(a)⁻¹)/da = −C₁(a)⁻¹ (dC₁(a)/da) C₁(a)⁻¹.

Differentiating C₁(a) we have

(26)    dC₁(a)/da = B [d(A + B′Q(a)B)⁻¹/da] B′
                  = −C₁(a) (dQ(a)/da) C₁(a).

Hence

        d(C₁(a)⁻¹)/da = dQ(a)/da

and C₁(a)⁻¹ = Q(a) + D for some constant matrix D.
For r_k sufficiently large there is an ε < 0 where g_i(x_k) < ε for i = p+1, . . ., m. Letting F be the
Hessian of f(x) at x_k and A(r_k) be the Hessian of f̄(r_k, x) at x_k for r_k = a, we know that A(r_k) → F
as r_k → ∞. Let Q(r_k) = r_k Λ(λ^{k+1}) and let B = ∇g(x_k). Remembering that H(r_k, x_k) → I as r_k → ∞,
we have the following theorem:
THEOREM 1: If NLP satisfies assumptions 1, . . ., 7, then the limit of r_k M_k as k → ∞ is Λ(λ*)⁻¹.
Also, we know from (9) that for i = 1, . . ., p

(28)    λ_i^{k+1} − λ_i^k = λ_i^k [exp[r_k g_i(x_k)] − 1].

Consequently,

(29)    λ^{k+1} = λ^k + r_k Λ(λ^k)∇φ(λ^k).

Since r_k M_k → Λ⁻¹(λ*) as k → ∞, we see that, for large r_k, r_k Λ(λ^k) approximates the inverse of the
Hessian of φ(λ).
This is important for the following reason. The ith component of the gradient of φ(λ^k) is

(30)    ∇_i φ(λ^k) = (1/r_k)[exp[r_k g_i(x_k)] − 1] for i = 1, . . ., p.

The gradient of φ is the direction of steepest ascent for maximizing φ. By multiplying the gradient by the
inverse of the Hessian matrix of φ we have the search direction of the Newton-Raphson technique. For
k large enough

(31)    λ_i^{k+1} = λ_i^k [exp[r_k g_i(x_k)] − 1] + λ_i^k.

Since Λ⁻¹(λ^k)/r_k approximates the Hessian of φ(λ) for k large enough, r_k Λ(λ^k) approximates the inverse
of the Hessian. This means that the direction of improvement in going from λ_i^k to λ_i^{k+1} in (31) is
approximately that of the Newton-Raphson technique. This property is termed asymptotic Newton
[8].
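A scalar sanity check of Theorem 1 is easy to run. With every dimension equal to one and H(r, x) taken as 1, formula (22) collapses to M = b(F + λG + rλb²)⁻¹b; the numbers F, G, b, and λ below are invented for the check.

```python
# Scalar check of Theorem 1: with n = m = p = 1 and H(r, x) ~ 1, formula
# (22) reduces to M = b * (F + lam * G + r * lam * b * b) ** -1 * b, and
# r * M should approach 1 / lam = Lambda(lam*)**-1 as r grows.
# The constants below are invented for illustration.
F, G, b, lam = 2.0, 1.5, 0.8, 3.0
for r in [1e2, 1e4, 1e6]:
    M = b * (F + lam * G + r * lam * b * b) ** -1 * b
print(round(r * M, 4))   # -> 0.3333, i.e., 1/lam as Theorem 1 predicts
```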
CONCLUSIONS
The importance of an algorithm having the asymptotic Newton structure comes about in the
following manner. The rate of convergence of steepest ascent is linear, and the rate improvement is
a function of the ratio of the difference of the largest and smallest eigenvalues of the Hessian to the
sum of these eigenvalues. In the algorithm presented here we have steepest ascent search directions
on φ(λ) within a vector space translated by an estimate of the size of the multipliers. Within the
transformed space, the Hessian more closely approximates the identity matrix as a is increased,
bringing the largest and smallest eigenvalues closer together. This means that the factor that governs
the rate of convergence is improving as a increases. As a consequence an asymptotic Newton algorithm
has a superlinear convergence rate.
Whether this algorithm is superior to the algorithms that do not have the second order differentiability
property is not clear. There is more computational effort at each iteration because nonbinding
constraints still can affect the calculation of the Hessian for the unconstrained minimization long after
they would have been dropped with the other algorithms. Also, r_k must be increased to infinity in the
limit. The choice of algorithm should be dependent on whether the discontinuity of the Hessian is a
problem.
BIBLIOGRAPHY
[1] Bertsekas, Dimitri, "On Penalty and Multiplier Methods for Constrained Minimization," Department
    of Electrical Engineering, working paper, University of Illinois at Urbana-Champaign
    (Apr. 1974).
[2] Bertsekas, Dimitri, "Combined Primal-Dual and Penalty Function Methods for Constrained
    Minimization," SIAM J. Control 13 (1975).
[3] Evans, J. P. and F. J. Gould, "An Existence Theorem for Penalty Function Theory," SIAM J.
    Control 12 (1974).
[4] Fiacco, A. V. and G. P. McCormick, Nonlinear Programming: Sequential Unconstrained Minimization
    Techniques (Wiley, New York, 1968).
[5] Hestenes, M. R., "Multiplier and Gradient Methods," J. Optimization Theory and Appl. 4, 303-320
    (1969).
[6] Luenberger, D. G., Introduction to Linear and Nonlinear Programming (Addison-Wesley, Reading,
    Mass., 1973).
[7] Mangasarian, O. L., Nonlinear Programming (McGraw-Hill, New York, 1969).
[8] Mangasarian, O. L., "Unconstrained Methods in Optimization," Computer Sciences Technical
    Report #224, University of Wisconsin (1974).
[9] Murphy, F. H., "A Class of Exponential Penalty Functions," SIAM J. Control 12 (1974).
[10] Murphy, F. H., "Topics in Nonlinear Programming, Penalty Function and Column Generation
    Algorithms," Ph.D. Dissertation, Yale University (1971).
[11] Murphy, F. H., "A Generalized Lagrange Multiplier Function Algorithm for Nonlinear Programming,"
    Discussion paper 114, Center for Mathematical Studies in Economics and Management
    Science (Oct. 1974).
[12] O'Neill, R. P. and W. B. Widhelm, "Acceleration of Lagrangian Column-Generation Algorithms
    by Penalty Function Methods," University of Maryland, College Park (Jan. 1975).
[13] Powell, M. J. D., "A Method for Nonlinear Constraints in Minimization Problems," in Optimization,
    R. Fletcher, ed. (Academic Press, New York, 1969).
ESTIMATION OF STRATEGIES IN A MARKOV GAME
Jerzy A. Filar
Monash University
Melbourne, Australia
ABSTRACT
In this paper a two-person Markov game, in discrete time, and with perfect state infor-
mation, is considered from the point of view of a single player (player A) only. It is assumed
that A's opponent (player B) uses the same strategy every time the game is played. It is
shown that A can obtain a consistent estimate of B's strategy on the basis of his past experi-
ence of playing the game with B. Two methods of deriving such an estimate are given. Fur-
ther, it is shown that using one of these estimates A can construct a strategy for himself
which is asymptotically optimal. A simple example of a game in which the above method
may be useful is given.
1. NATURE OF THE PROBLEM
We shall consider a Markov game (players A and B) with perfect state information and in discrete
time from the point of view of player A only. A's objective is to maximize his expected payoff function
v(x, y), where x denotes A's own strategy and y denotes B's strategy. We impose the following
assumptions:
(i) The players have no knowledge of their opponent's strategy or payoff functions.
(ii) A knows that B uses the same strategy, η, every time the game is played.
Clearly, it would be in A's interest to gather information about B's strategy, which would help him in
the choice of his own strategy. Hence the problem is to devise an estimation procedure which will
enable player A to form a reliable estimate, η̂, of B's true strategy η. Now, by maximizing the function
v(x, η̂) with respect to x, A could obtain a strategy x* (not necessarily unique) for himself, which we
require to be asymptotically optimal subject to B continuing to use η. That is, if η̂ is based on a large
number of past plays, then v(x*, η) will better approximate max v(x, η). The maximization is
over all possible strategies for A.
2. RELATIONSHIP TO EXISTING THEORIES
In 1967 J. C. Harsanyi [2] developed a new approach to games with incomplete information. He
showed that under certain conditions such games can be reduced to equivalent games with complete
information involving some random elements, for which Nash equilibrium points can be found. Harsanyi's
theory initiated research into repeated games of incomplete information by R. J. Aumann,
M. Maschler, J. F. Mertens, S. Zamir and others; for example, see [1], [3], [7], [8], and [9]. These are
games in which each player has only partial information about a chance move that takes place at the
beginning of the game. Two alternative approaches were applied to the often-repeated games of this
type. The first is to consider the n-times repeated game Γ_n and its value v_n and then find lim v_n if
it exists. The second approach is to treat directly the infinitely repeated game Γ_∞ and find its value
v_∞ when it exists. The rate of convergence of v_n to its limit has also been discussed in some cases.
Further, in 1969 J. E. Walsh in his "Median Game Theory" (see [5]) recognized the fact that equilibrium
point solutions may not be satisfactory criteria for optimality in many game situations such as
economic and military games. He differentiated between situations where the players behave protectively
and/or vindictively, since assumptions about the players' motivation affect the choice of the
optimality criterion.
The game considered in this paper is a multistage game with incomplete information in which a
special assumption is made about the behavior of one player, and thus it resembles the games mentioned
above. However, no attempt is made here to solve the game in the equilibrium strategy sense.
Instead, a statistical technique is proposed for estimating the "missing information" as well as a technique
for utilizing estimates thus obtained to construct a strategy for one player which is better than
the minimax strategy. Further, we show that the strategy based on our estimates converges in probability
to the "optimal" strategy for one player, provided that a special assumption about the players
holds. This approach is motivated by the idea that when playing a game against a particular opponent
one can benefit from prior information about that player's strategy. The method suggested in this
paper is straightforward and effective in practical applications.
3. A MARKOV GAME IN DISCRETE TIME
We shall use Zachrisson's definition of a Markov game (see [6]) with some modifications. A sketch
of its structure is given below.
Suppose we have a two-person game which at any time of play is in one and only one of N identifiable
states; we number them 1, 2, . . ., N−1, N. We shall assume a finite time of play, T*, and play
in reverse time, i.e., the game starts at T* and ends at 0. Players A and B choose their strategy parameters
at discrete points of time T*, T*−1, . . ., t+1, t, . . ., 2, 1. Thus, if the game is in state i at time
t+1, A and B will choose a pair of strategy parameters x_i(t+1), y_i(t+1) respectively. We note that
the players' strategies are functions of both time and state. The game is then transferred to a new
state where it will remain until the next decision time at the reverse time t. We write

(3.1)   p_ij(t) = p_ij(t; x_i(t+1), y_i(t+1))
              = Pr{game goes to state j at time t | it was in state i at t+1 and A and B used strategies
                x_i(t+1), y_i(t+1) resp.}

where j ∈ S = {1, 2, . . ., N}. The corresponding transition matrix can be written as:

(3.2)   P(t) = P(t; x(t+1), y(t+1)) = (p_ij(t))_{i,j=1}^N

where the vector

        x(t+1) = (x_1(t+1), x_2(t+1), . . ., x_N(t+1))ᵀ
represents A's complete strategy at time t+1. Vector y(t+1) is defined similarly. Thus the game is
determined by a sequence of T* matrices: P(T*−1), . . ., P(t), . . ., P(2), P(1), P(0).
We deviate from Zachrisson in the definition of the players' payoff functions. We define:

(3.3)   w_ij^A(t) = w_ij^A(t; x_i(t+1), y_i(t+1)) as A's payoff in state j at time t, provided that the game was
        in state i at time t+1 and the players' strategies were x_i(t+1) and y_i(t+1) at that
        time, where t = 1, . . ., T*−1.

The matrix W^A(t) = (w_ij^A(t))_{i,j=1}^N is called player A's current payoff matrix at t. We define a
similar payoff matrix W^B(t) for player B, and in general we shall assume that w_ij^A(t) + w_ij^B(t) ≠ 0.
Further, we let:

(3.4)   v_i^A(t) = player A's expected payoff from time t until the end of the game, if the game is in state
        i at time t, where t = 1, 2, . . ., T*, and we call the vector v^A(t) = (v_1^A(t), v_2^A(t),
        . . ., v_N^A(t))ᵀ player A's value vector at time t, with v^B(t) defined similarly.

We would expect the value vectors at successive times to be related to one another by some kind
of recursive equation. It turns out that if we define the matrix operator D by:

        D(A) = (a_11, a_22, . . ., a_NN)ᵀ = a vector of the diagonal elements of A, where A is any matrix
        A = (a_ij)_{i,j=1}^N,

then the following backward equations hold:

(3.5)   v^A(1) = D{[P(0)][W^A(0)]ᵀ}
        v^A(t+1) = D{[P(t)][W^A(t)]ᵀ} + P(t)v^A(t)

for t = 1, 2, . . ., T*−1. Equations (3.5) simply state that for each i,

(3.6)   v_i^A(t+1) = Σ_j p_ij(t)[w_ij^A(t) + v_j^A(t)],

which is what we would expect. We shall define the distribution vector at time t by q(t) = (q_1(t), q_2(t),
. . ., q_N(t))ᵀ for t = 0, 1, . . ., T*−1, T*, where q_i(t) = Pr{game is in state i at time t}. It is
assumed that the initial distribution vector q(T*) is known. It is easily verifiable that:

(3.7)   q(t) = [P(t)]ᵀ q(t+1) for t = T*−1, . . ., 2, 1, 0.
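The backward equations (3.5) and their componentwise form (3.6) can be checked numerically on a small example; the two-state transition probabilities and payoffs below are made up for the illustration, not taken from the paper.

```python
N, T_star = 2, 3
# P[t][i][j] = p_ij(t); each row of P(t) is a probability distribution.
P = [[[0.7, 0.3], [0.4, 0.6]],
     [[0.5, 0.5], [0.2, 0.8]],
     [[0.9, 0.1], [0.6, 0.4]]]
# W[t][i][j] = w_ij(t): A's payoff for the transition i -> j decided at t.
W = [[[1.0, 0.0], [2.0, 1.0]],
     [[0.5, 1.5], [0.0, 2.0]],
     [[1.0, 1.0], [3.0, 0.0]]]

def diag(A):
    # the operator D: the vector of diagonal elements of a square matrix
    return [A[i][i] for i in range(len(A))]

def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

def ab_t(A, B):
    # the matrix product A B^T
    return [[sum(A[i][k] * B[j][k] for k in range(len(B[0])))
             for j in range(len(B))] for i in range(len(A))]

# backward recursion (3.5): v(1) = D{P(0) W(0)^T},
# then v(t+1) = D{P(t) W(t)^T} + P(t) v(t)
v = diag(ab_t(P[0], W[0]))
for t in range(1, T_star):
    v = [d + pv for d, pv in zip(diag(ab_t(P[t], W[t])), matvec(P[t], v))]

# the componentwise form (3.6) gives the same value vector
u = [sum(P[0][i][j] * W[0][i][j] for j in range(N)) for i in range(N)]
for t in range(1, T_star):
    u = [sum(P[t][i][j] * (W[t][i][j] + u[j]) for j in range(N)) for i in range(N)]

print(all(abs(a - b) < 1e-9 for a, b in zip(v, u)))  # True
```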
Since in the remainder of this paper we shall consider only player A's payoffs, the superscript
A in v_i^A(t), w_ij^A(t), W^A(t), and v^A(t) will be omitted from now on.
4. DERIVATION OF ESTIMATES OF B's STRATEGY
Postulate (ii) of section 1 ensures that every time the game is in state i at the (t+1)th decision
point, player B uses a strategy parameter η_i. Suppose that in order to estimate η, A plays the game
with B n times and keeps a record of all transitions which eventuate. We shall assume that during
those n plays A also uses a fixed strategy, ξ; ξ consists of a set of T* vectors ξ(t).
Let K_i = the number of times the game was in state i at the (t+1)th decision point,
and S_ij = the number of times the transition i → j was observed in time t+1 → t.
Clearly

        Σ_{i=1}^N K_i = n and Σ_{j=1}^N S_ij = K_i.

K_i and S_ij are random variables with the distributions B(n, q_i(t+1)) and B(K_i, p_ij(t)) respectively.
Now, since A knows his own strategy, p_ij(t), as in (3.1), is a function of η_i only. Let us write
p_ij(t) = f_j(η_i), where η_i will usually be restricted to some interval [a_i, b_i] by the nature of the game.
It is natural for A to take the statistic S_ij/K_i to be an estimator of p_ij(t).
Then an estimate η̂_i of η_i can be found by solving the equation:

(4.1)   S_ij/K_i = f_j(η̂_i) for η̂_i.

Further, if f_j is a one-to-one function we can write

(4.2)   η̂_i = g_j(S_ij/K_i), where g_j = f_j⁻¹,

the inverse function of f_j. It should be clear that with this approach any one of the functions f_j,
j = 1, 2, . . ., N could be used for the purpose of estimating η_i. In some cases the average of the N
estimates thus obtained could be appropriate.
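A minimal illustration of the estimator (4.1)-(4.2): the parametric form p_i1(t) = f_1(η_i) = η_i² below is invented for the example (the paper leaves f_j game-specific), so the inverse is g_1(z) = √z.

```python
import random

# Invented parametric form: p_i1(t) = f_1(eta_i) = eta_i**2, so the
# estimate (4.2) is eta_hat = g_1(S_i1 / K_i) with g_1(z) = sqrt(z).
random.seed(42)
eta_true = 0.6                      # B's fixed but unknown parameter in state i
K_i = 20000                         # visits to state i at decision point t+1
S_i1 = sum(1 for _ in range(K_i) if random.random() < eta_true ** 2)

p_hat = S_i1 / K_i                  # the statistic S_i1 / K_i estimates p_i1(t)
eta_hat = p_hat ** 0.5              # (4.2)
print(abs(eta_hat - eta_true) < 0.05)   # consistent estimate -> True
```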
An alternative method of deriving an estimate of η_i is the method of maximum likelihood. Let the vector of observations for a given K_i be S = (S_i1, S_i2, . . ., S_iN) ∈ W, where it is possible to list all elements of W. Let us write y_i = y_i(t+1) ∈ [a_i, b_i], where η_i is the true value of y_i. We shall define η̂_i to be the m.l.e. of η_i if:
(4.3)   L(η̂_i | K_i, S) = sup_{y_i ∈ [a_i, b_i]} L(y_i | K_i, S),
where
L(y_i | K_i, S) = (K_i! / (S_i1! S_i2! . . . S_iN!)) Π_j [p_ij(y_i)]^{S_ij}
is the likelihood function, since the joint distribution of the S_ij's is clearly multinomial. The following results are special cases of the results proved in Rao [4], pages 293-298.
a. If (1) the functions p_ij(y_i) for j = 1, 2, . . ., N are continuous in y_i on [a_i, b_i], and (2) given δ > 0,
inf_{|y_i − η_i| > δ} Σ_j p_ij(η_i) log [p_ij(η_i)/p_ij(y_i)] > 0,
then a m.l.e. η̂_i exists and converges to η_i with probability 1 as K_i → ∞.
b. If condition (2) of (a) holds and the functions p_ij(y_i) admit first-order derivatives which are continuous, then the m.l.e. η̂_i is asymptotically normally distributed.
Should we desire to construct a confidence interval for η_i we could use the normal approximation to the distribution of η̂_i. Otherwise, for any y_i ∈ [a_i, b_i] we could construct a region W_α(y_i) ⊂ W such that
Pr{S ∈ W_α(y_i) | y_i} ≥ 1 − α.
Then if we observe a vector S* ∈ W such that S* ∈ W_α(y_i) for y_i ∈ [l_1, l_2] ⊂ [a_i, b_i], we can state
Pr{l_1 ≤ η_i ≤ l_2 | S*} ≥ 1 − α.
It is also worth mentioning that the estimators S_ij/K_i and η̂_i derived in (4.1) and (4.2) are consistent for p_ij(t) and η_i, respectively. The distribution of these estimators is difficult to determine; however, their moments can either be found exactly or be approximated by standard techniques. In particular, conditional on K_i = k_i > 0, S_ij/K_i has mean p_ij(t) and variance
Var(S_ij/K_i | K_i = k_i) = p_ij(t)(1 − p_ij(t))/k_i,
and this → 0 as n → ∞. Further, if we assume that g_j = f_j^{-1} and its first two derivatives g_j^{(1)} and g_j^{(2)} exist and are bounded at the point p_ij(t), then it follows that:
(4.8)   E(η̂_i) ≈ η_i + (1/2) g_j^{(2)}(p_ij(t)) Var(S_ij/K_i),
and
(4.9)   Var(η̂_i) ≈ [g_j^{(1)}(p_ij(t))]² Var(S_ij/K_i).
Thus E(η̂_i) → η_i and Var(η̂_i) → 0 as n → ∞, and η̂_i is a consistent but biased estimator of η_i.
A confidence interval for p_ij(t) can be obtained using the fact that for K_i = k_i the random variable
(4.10)   Z = (S_ij/k_i − p_ij(t)) / √(p_ij(t)(1 − p_ij(t))/k_i)
is approximately N(0, 1). So, from the normal distribution we can find t_α such that Pr{|Z| < t_α} = 1 − α, which can be written as:
(4.11)   Pr{l_1(S_ij | k_i) < p_ij(t) < l_2(S_ij | k_i)} = 1 − α,
where l_1, l_2 equal
(2S_ij + t_α² ∓ t_α √(t_α² + 4S_ij(1 − S_ij/k_i))) / (2(k_i + t_α²)),
respectively. Further, if the function g_j = f_j^{-1} is monotone increasing, we have that:
(4.12)   Pr{g_j[l_1(S_ij | k_i)] < η_i < g_j[l_2(S_ij | k_i)]} = 1 − α,
with the inequalities above simply reversed if g_j were monotone decreasing. Note that the width of the intervals in (4.11) and (4.12) tends to zero as k_i → ∞, provided that g_j is continuous as well.
COMMENTS:
1. Confidence limits for η_i conditional on K_i = k_i are even more useful than if they were conditional on n, for to estimate B's strategy in the ith state at the (t+1)th stage, player A would want to observe a sufficiently large value of K_i.
2. Note that since we were considering a typical time interval (t+1) → t, the argument (t) was dropped in this section. Strictly, we should have written η_i(t+1), K_i(t+1), S_ij(t), etc.
3. To obtain an estimate of B's complete strategy, the estimation procedure outlined above would have to be repeated for each of the states 1, 2, . . ., N and at each decision point T*, T* − 1, . . ., t, . . ., 2, 1.
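The estimation scheme of (4.1)–(4.2) is easy to sketch numerically. The transition law below is a hypothetical one-parameter family chosen purely for illustration (it is the f_1 of the example in section 7, with inverse g(z) = 3/2 − 2z); the code simulates the transitions observed out of one state, forms the statistic S_i1/K_i, and inverts it as in (4.2):

```python
import random

# Illustrative one-parameter transition law for a two-state game:
# p_11(eta) = (3/2 - eta)/2 on eta in [0, 1], as in section 7;
# g is its inverse, g(z) = 3/2 - 2z, clipped to [0, 1].
def f(eta):
    return (1.5 - eta) / 2.0

def g(z):
    return min(1.0, max(0.0, 1.5 - 2.0 * z))

def estimate_eta(true_eta, n_plays, rng):
    """Simulate K_i = n_plays transitions out of state i and return
    the estimator eta_hat = g(S_i1 / K_i) of (4.2)."""
    s_i1 = sum(rng.random() < f(true_eta) for _ in range(n_plays))
    return g(s_i1 / n_plays)

rng = random.Random(0)
eta = 0.8
small = estimate_eta(eta, 50, rng)       # a short experiment
large = estimate_eta(eta, 200000, rng)   # consistency: close to 0.8
print(small, large)
```

The larger experiment illustrates the consistency claimed for (4.2): as K_i grows, S_i1/K_i concentrates at p_11(η_1) and the continuous inverse g carries this over to η̂_1.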
5. DERIVATION OF A's STRATEGY
Let us represent B's true strategy for the whole game, η, by a set of T* vectors:
η(t) = (η_1(t), η_2(t), . . ., η_N(t))^T,   t = 1, 2, . . ., T*.
We shall define player A's optimal strategy, x°, as the set of vectors x°(t), t = 1, 2, . . ., T*, which satisfies the following equations (see (3.5)):
(5.1)   v°(1) = v(x°(1), η(1)) = max_{(x(1))} {P(0; x(1), η(1)) [r(0; x(1))]^T},
(5.2)   v°(t+1) = v(x°(t+1), η(t+1)) = max_{(x(t+1))} {P(t; x(t+1), η(t+1)) [r(t)]^T + P(t; x(t+1), η(t+1)) v°(t)}
for t = 0, 1, 2, . . ., T* − 1, where the maximization over (x(t+1)) means that the ith row equation in (5.2) is maximized over x_i(t+1), which will usually be restricted to a bounded interval [c_i, d_i] specified by the rules of the game.
Suppose now that, by following the estimation procedure of section 4, A obtains an estimate, η̂, consisting of vectors η̂(t), t = 1, . . ., T*, of B's strategy. Then by following the maximization process of (5.2) with the vectors η(t) replaced by η̂(t), A will obtain a strategy, x*, consisting of vectors x*(t), t = 1, 2, . . ., T*, and a new set of expected payoff vectors v*(t) = v(x*(t), η̂(t)), t = 1, 2, . . ., T*. In the next section a comparison will be made between the effects of employing the strategies x* and x°.
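The maximization in (5.1)–(5.2) is row-by-row: each component x_i(t+1) is chosen from its admissible interval to maximize the ith row expression. A minimal grid-search sketch follows; the payoff function is an illustrative assumption (the single-decision payoff v_1(x, y) = (c/2)(x + xy − x²) of the example in section 7, with c = 2), for which the maximizer is known analytically to be x = (1 + y)/2:

```python
def v1(x, y, c=2.0):
    # Player A's expected payoff in state 1 for quotes x (A's) and y (B's).
    return 0.5 * c * (x + x * y - x * x)

def row_maximize(payoff, y, lo=0.0, hi=1.0, steps=10000):
    """Grid search for the maximizing x on [lo, hi], standing in for
    the row-wise maximization over x_i(t+1) in (5.2)."""
    best_x, best_v = lo, payoff(lo, y)
    for k in range(1, steps + 1):
        x = lo + (hi - lo) * k / steps
        v = payoff(x, y)
        if v > best_v:
            best_x, best_v = x, v
    return best_x, best_v

x_star, v_star = row_maximize(v1, y=0.5)
# Analytically the maximum is at x = (1 + y)/2 = 0.75.
print(x_star, v_star)
```

In a multi-stage game this one-dimensional search would be applied to each row of (5.2), working backward from t = 0 with the previously computed v°(t) folded into the payoff.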
Further, if initially A has no idea as to what η really is, he might adopt a pessimistic approach and assume that B plays directly against him. Thus in Equations (3.5), for each admissible y(t), he can work out
(5.3)   V(y(t)) = max_{x(t)} v(x(t), y(t)),   t = 1, . . ., T*,
which yields a vector x̄(t) = x̄(y(t)) such that
(5.4)   V(y(t)) = v(x̄(t), y(t)),   t = 1, . . ., T*.
Then according to the pessimistic assumption A thinks that B is using a strategy, ȳ, consisting of vectors ȳ(t), t = 1, . . ., T*, such that:
(5.5)   V(ȳ(t)) = min_{y(t)} V(y(t)),   t = 1, . . ., T*.
Thus, during the experimental plays of the game A will employ a strategy, ξ, consisting of vectors ξ(t) = x̄(ȳ(t)), t = 1, . . ., T*, and he will expect his value vectors to be:
(5.6)   V(ȳ(t)) = v(ξ(t), ȳ(t)) = min_{y(t)} max_{x(t)} v(x(t), y(t))
for t = 1, 2, . . ., T*. Whereas in fact his value vectors will be given by
(5.7)   _ξv^a(t) = v(ξ(t), η(t)),   t = 1, . . ., T*.
Similarly, if A employs x*, his actual value vectors will not be the v*(t)'s but:
(5.8)   v^a(t) = v(x*(t), η(t)),   t = 1, . . ., T*.
6. EVALUATION OF THE ESTIMATION TECHNIQUE
Clearly, the strategy x*, based on the estimate η̂, will be asymptotically optimal if the vectors
v^a(t) → v°(t) in probability, for t = 1, 2, . . ., T*.
We shall show that this is in fact so under quite general conditions. Consider the ith row of Equations (5.2)-(5.8) only. If η̂ is obtained as in section 4, we have that
η̂_i(t) → η_i(t) in probability as k_i(t) → ∞.
Now, if we assume that v_i(x_i(t), y_i(t)) is jointly continuous in x_i(t) and y_i(t), then it can be shown that V_i(y_i(t)) is continuous as well and hence that
(6.1)   v_i*(t) = V_i(η̂_i(t)) → V_i(η_i(t)) in probability as k_i(t) → ∞,
but
(6.2)   v_i*(t) − v_i^a(t) = v_i(x_i*(t), η̂_i(t)) − v_i(x_i*(t), η_i(t)) → 0 in probability as k_i(t) → ∞;
therefore,
v_i^a(t) → V_i(η_i(t)) = v_i°(t) in probability as k_i(t) → ∞,
as required. Further, by looking at Equations (3.5), we see that v_i(x_i(t), y_i(t)) will be continuous if the functions p_ij(t−1, x_i(t), y_i(t)) and W_ij(t−1, x_i(t), y_i(t)) are continuous. Note that all statements about continuity need refer only to the intervals of interest specified by the rules of the game. Since the recursive equations involve only finitely many steps, the statements about asymptotic optimality can be proved by induction for each t = 1, . . ., T*.
In practice, when playing the experimental games, player A will want to know "how soon" he can start using x* with a reasonable confidence that this will prove better than ξ. For a typical i and t this can be illustrated as in Figure 1.
[Figure 1]
Thus, player A requires a criterion for deciding how large a value of k_i(t) needs to be observed before he can assert with at least (1 − α)100 percent confidence that the use of x_i*(t) is better than that of ξ_i(t) (i.e., v_i^a(t) > _ξv_i^a(t)). One such criterion is derived below.
Suppose that when estimating η_i(t) we construct a (1 − α)100 percent C.I. [l_1, l_2] = I as in section 4. Let us define the following quantities:
S = max_{y_i(t) ∈ I} v_i(x_i*(t), y_i(t))   and   s = min_{y_i(t) ∈ I} v_i(x_i*(t), y_i(t)),
S′ = max_{y_i(t) ∈ I} v_i(ξ_i(t), y_i(t))   and   s′ = min_{y_i(t) ∈ I} v_i(ξ_i(t), y_i(t)).
Then
(6.4)   Pr{s ≤ v_i^a(t) ≤ S} ≥ 1 − α   and   Pr{s′ ≤ _ξv_i^a(t) ≤ S′} ≥ 1 − α.
Thus, since the width of I → 0 as k_i(t) → ∞, if we can find k_i*(t) for which S′ < s (see Figure 1), then with probability of at least 1 − α the use of the estimation strategy x_i*(t) will yield a better expected payoff
than the minimax strategy ξ_i(t). Hence a complete set K = {k_i*(t): i = 1, . . ., N and t = 1, . . ., T*} can be constructed, and a corresponding general stopping rule could be: A continues to use ξ until every member of K has been observed.
COMMENTS:
1. If the numbers T* and N are large, quite clearly a very large number of observations may be required. However, if some states are very unlikely, player A may not bother to estimate B's strategy in those states.
2. A major drawback of the above approach is that player B might change his strategy soon after A starts using x*. Thus the benefit to A could be short-lived. On the other hand, if B is a "conservative" player or if his returns remain satisfactory, this may not happen.
7. A SIMPLE EXAMPLE†
Suppose we have two contractors, A and B, bidding at discrete points of time for a contract (which has to be renewed monthly, for instance). Let state 1 correspond to B getting the contract for a month and let state 2 correspond to A getting it. Players' strategies in state i are quotes (scaled down) for the contract, x_i, y_i ∈ [0, 1], i = 1, 2. We consider a single-decision game, i.e. T* = 1. Suppose that
(7.1)   P(0) = (1/2) | 1 + x_1 − y_1    1 − x_1 + y_1 |
                     | 1 + x_2 − y_2    1 − x_2 + y_2 |
and that A's payoff matrix is:
(7.2)   W^A(0) = | 0    cx_1 |   where c > 1,
                 | 0    cx_2 |
i.e. if A doesn't get the contract his payoff is 0, and if he gets it his payoff is c times the quote he gave (which was scaled down). Then from Equation (3.5) we have that A's expected payoff vector is:
(7.3)   v(1) = ( v_1(x_1, y_1) ) = (c/2) ( x_1 + x_1y_1 − x_1² )
               ( v_2(x_2, y_2) )         ( x_2 + x_2y_2 − x_2² )
Let player B's strategy (unknown to A) be represented by the vector η = (η_1, η_2)^T. We shall assume that the game starts in state 1. Thus A is interested in estimating η_1 only, and in maximizing v_1(x_1, η_1). Now, the maximum of (c/2)(x_1 + x_1y_1 − x_1²) for a given y_1 occurs when x_1 = (1 + y_1)/2, so we have from (5.3) that
†This example is a special case of a general Markov game model of duopoly which I have constructed recently. There are a number of reasonable ways of analyzing the general model, depending largely on the assumptions about players' motivations. What is done here, however, is intended purely as an illustration of our statistical technique.
V_1(y_1) = max_{x_1 ∈ [0,1]} v_1(x_1, y_1) = (c/8)(1 + y_1)²,
and that
V_1(ȳ_1) = min_{y_1 ∈ [0,1]} V_1(y_1) = c/8   at ȳ_1 = 0,
while ξ_1 = x̄_1(0) = (1 + 0)/2 = 1/2. Thus during the experimental plays A will use ξ_1 = 1/2. While he is using ξ_1 = 1/2, the function f_1(y_1) = p_11(ξ_1, y_1) = (1/2)(3/2 − y_1), so the function g_1(z) = f_1^{-1}(z) = 3/2 − 2z is continuous and monotone decreasing. Thus if we use f_1 to estimate η_1 we have, as in (4.2), that:
η̂_1 = 3/2 − 2S_11/K_1.
(Note that in this case S_11 determines S_12 as well, for a given K_1, so that the m.l.e. η̂_1 will not be any better than the η̂_1 above.)
Now, suppose that after 40 plays (each starting in state 1) A observed S_11 = 10, S_12 = 30 (K_1 = 40, of course); then η̂_1 = 3/2 − 2(1/4) = 1, and if he requires a 90% C.I. for η_1 he can take α = .1, and by the method of section 4 he will find that Pr{.1709 ≤ p_11 ≤ .3608} = .9 and, similarly, that Pr{.7784 ≤ η_1 ≤ 1} = .9.
From here A obtains x_1* by finding
max_{x_1 ∈ [0,1]} v_1(x_1, η̂_1) = max_{x_1} {(c/2)(2x_1 − x_1²)} = c/2
at x_1* = 1, and his actual expected payoff when he uses x_1* = 1 will be: v_1^a = v_1(x_1*, η_1) = (c/2)η_1. Thus by the 1st equation of (6.4), Pr{.3892c ≤ v_1^a ≤ .5000c} ≥ .9. Whereas if A uses the minimax strategy ξ_1 = 1/2, his expected payoff will actually be: _ξv_1^a = v_1(ξ_1, η_1) = c(1/8 + η_1/4). So, by the 2nd equation of (6.4), Pr{.3196c ≤ _ξv_1^a ≤ .3750c} ≥ .9, which satisfies the stopping rule with the number of trials as small as 40.
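The stopping comparison can be checked directly on these numbers. Taking the 90% interval [.7784, 1] for η_1 as given, both payoff functions below are increasing in y_1, so the bounds of (6.4) are attained at the endpoints of the interval (a minimal sketch; payoffs are expressed in units of c):

```python
# Interval I for eta_1 from the example (taken as given).
l1, l2 = 0.7784, 1.0

def v1(x, y):
    # A's expected payoff in state 1, in units of c: (x + x*y - x^2)/2.
    return 0.5 * (x + x * y - x * x)

# Bounds of (6.4): both payoffs are increasing in y, so the extrema
# over I occur at the interval endpoints.
s, S = v1(1.0, l1), v1(1.0, l2)          # bounds for v_1^a (using x* = 1)
s_xi, S_xi = v1(0.5, l1), v1(0.5, l2)    # bounds for the minimax payoff
print(round(s, 4), round(S, 4), round(s_xi, 4), round(S_xi, 4))
# Stopping rule: the worst case under x* beats the best case under
# the minimax strategy xi_1 = 1/2.
print(s > S_xi)  # True
```

This reproduces the values in the text: [.3892c, .5000c] for v_1^a against [.3196c, .3750c] for the minimax payoff, with .3892c > .3750c triggering the stopping rule.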
ACKNOWLEDGMENT
I would like to thank Professor J. S. Maritz of Monash University, with whom I discussed the results of this paper on a number of occasions.
REFERENCES
[1] Aumann, R. J. and M. Maschler, "Repeated Games with Incomplete Information: A Survey of Recent Results," Report to the U.S. Arms Control and Disarmament Agency, Washington, D.C., Final Report on Contract ACDA/ST-143, Ch. III (Mathematica, Princeton, N.J., Sept. 1967).
[2] Harsanyi, J. C., "Games with Incomplete Information Played by 'Bayesian' Players: Parts I, II, III," Management Science, 14 (1967).
[3] Mertens, J. F. and S. Zamir, "The Value of Two-Person Zero-Sum Repeated Games," International Journal of Game Theory, 1 (1971).
[4] Rao, C. R., Linear Statistical Inference and Its Applications (Wiley and Sons, New York, 1965).
[5] Walsh, J. E., "Discrete Two-Person Game Theory with Median Payoff Criterion," Opsearch, 6 (1969).
[6] Zachrisson, L. E., "Markov Games," Advances in Game Theory, Annals of Mathematics Studies 52 (Princeton University Press, 1964).
[7] Zamir, S., "On the Relation between Finitely and Infinitely Repeated Games with Incomplete Information," International Journal of Game Theory, 1 (1971).
[8] Zamir, S., "On Repeated Games with General Information Function," International Journal of Game Theory, 2 (1973).
[9] Zamir, S., "On the Notion of Value for Games with Infinitely Many Stages," The Annals of Statistics, 1 (1973).
FLOWSHOP SEQUENCING PROBLEM WITH ORDERED PROCESSING
TIME MATRICES: A GENERAL CASE*
M. L. Smith, S. S. Panwalkar, and R. A. Dudek
Texas Tech University
Department of Industrial Engineering
Lubbock, Texas
ABSTRACT
The ordered matrix flow shop problem with no passing of jobs is considered. In an earlier
paper, the authors have considered a special case of the problem and have proposed a simple
and efficient algorithm that finds a sequence with minimum makespan for a special problem.
This paper considers a more general case. This technique is shown to be considerably more
efficient than are existing methods for the conventional flow shop problems.
INTRODUCTION
In [4], Smith, Panwalkar and Dudek have defined a subcategory of the n-job, m-machine problem called an "ordered flowshop problem." A flowshop problem is called an ordered problem if the following two properties are satisfied:
(1) If a particular job has a smaller processing time on any machine than does a second job on the same machine, then the processing time of this first job is less than or equal to the processing time of the second job on all corresponding machines.
(2) If one job has its jth smallest processing time on some machine, then every other job will have its jth smallest processing time on the same machine.
Beyond these special properties of the processing times, the usual assumptions for the general flowshop problems [1] also apply to the ordered matrix problem. One important assumption is that of no passing of jobs.
In an earlier paper, Smith, Dudek and Panwalkar [3] first introduced the ordered flowshop problem. They enumerated several hundred problems and observed two interesting characteristics of the ordered problem. First, if the maximum processing time for every job in an ordered problem occurred on the first (last) machine, then the sequence obtained by arranging jobs in descending (ascending) order of processing times always represented a minimum makespan sequence.
Secondly, if the maximum processing time for every job in an ordered flowshop problem occurred on an intermediate machine, then there was always a minimum makespan sequence such that a subset of jobs was arranged in ascending order of processing times followed by the remaining jobs in descending order.
Based on these observations, Smith et al. [3] proposed two algorithms for the ordered flowshop problem, one for the special case when the maximum processing time for every job occurred on the first
*This research was partially supported by the National Science Foundation Grant GK 2869.
or the last machine, and the second for the general case. Although complete enumeration showed that all problems attempted were optimally solved by the algorithms, no proofs of optimality were developed.
Recently, Smith et al. [4] were able to develop the proof of optimality for the algorithm for the special case of the ordered flowshop problem. In this paper we present the second algorithm and show that the algorithm will generate at least one optimal solution.
PROPOSED ALGORITHM
To solve the ordered matrix problem the following special algorithm was developed. This method uses a form of limited enumeration, with 2^(n−1) sequences being formed. The steps of the algorithm are:
Step 1. Rank the jobs according to ascending processing times on the first machine. (In case of ties, break the ties by considering the processing times on the second or third machine if necessary.)
Step 2. Place the lowest-ranking job which has not been sequenced into the leftmost unfilled sequence position in each partial sequence being constructed. Also place this same job in the rightmost unfilled sequence position in each partial sequence. Note that the number of partial sequences doubles as each job (other than the highest-ranking job) is added.
Step 3. Repeat Step 2 until the first n − 1 jobs are placed into every sequence; then the highest-ranking job is placed into the only unfilled position.
Step 4. The 2^(n−1) sequences are evaluated for makespan, and sequences with the lowest makespan are optimal.
A 4-job, 4-machine ordered flowshop problem will be solved to exemplify the above procedure. Processing times are shown in Table I. The matrix representing the processing times is called an ordered matrix. Figure 1 shows the partial sequences formed at each stage of solution and the completed sequences. The optimal sequence is 2 4 3 1 with a makespan of 524.
Table I. Ordered Matrix Problem

                Machine
  Job      A      B      C      D
   1      42     56     44     32
   2      66     88     69     51
   3      69     92     72     53
   4      89    119     93     68
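The four steps above, applied to Table I, can be sketched as follows (a minimal illustration, not the authors' FORTRAN program): the code enumerates the 2^(n−1) pyramid sequences by placing each ranked job at the left or right end, evaluates the makespan of each permutation schedule, and recovers the optimum of 524 for sequence 2 4 3 1.

```python
from itertools import product

# Processing times from Table I (rows: jobs 1-4; columns: machines A-D).
P = {1: [42, 56, 44, 32], 2: [66, 88, 69, 51],
     3: [69, 92, 72, 53], 4: [89, 119, 93, 68]}

def makespan(seq, times):
    """Completion time of the last job on the last machine for a
    permutation schedule with no passing of jobs."""
    m = len(next(iter(times.values())))
    done = [0] * m  # completion time of the previous job on each machine
    for job in seq:
        t = 0
        for j in range(m):
            t = max(t, done[j]) + times[job][j]
            done[j] = t
    return done[-1]

def pyramid_sequences(times):
    """Steps 1-3: rank jobs by first-machine time, place each of the
    first n-1 jobs at either end; the largest job fills the gap."""
    ranked = sorted(times, key=lambda job: times[job][0])
    for choices in product(("L", "R"), repeat=len(ranked) - 1):
        left, right = [], []
        for job, side in zip(ranked, choices):
            (left if side == "L" else right).append(job)
        yield tuple(left + [ranked[-1]] + right[::-1])

# Step 4: evaluate all 2^(n-1) sequences and keep the best.
best = min(pyramid_sequences(P), key=lambda s: makespan(s, P))
print(best, makespan(best, P))  # (2, 4, 3, 1) 524
```

For this instance the eight pyramid sequences have a unique minimum, so the search returns exactly the sequence and makespan reported in the text.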
Further improvements in the computation time of the algorithm may be made by the following modification. In Figure 1, we notice that the solution development is similar to that of the branch and bound method (although many branches considered in the general problem are eliminated). Thus, by computing suitable upper bounds on various nodes, it may be possible to eliminate many branches. This idea has not yet been incorporated in the computer program that was developed. Such a procedure will limit the total number of sequences that need to be generated to 2^(n−1) or less, and hence the computation times for problems may be reduced from the values above.
One characteristic of all the sequences generated by the algorithm is that once the position of the largest job is fixed, all jobs preceding it are always arranged in an ascending order of processing times, from the job in position 1 to the job appearing to the left of the largest job. Similarly, all jobs succeeding the largest job are arranged according to the descending order of processing times. A sequence satisfying this characteristic (ascending order followed by descending order) will be considered as having a "pyramid" structure (see Appendix for further discussion).

Figure 1. Graphical representation of the solution development
The optimal sequence for an ordered flowshop problem, as described in the preceding section, is optimal only if we limit consideration to permutation schedules (schedules with the same order of jobs on all machines). This can be seen by using the flowshop problem in Table II. This problem meets all the requirements for ordered matrix problems and has sequence 1 2 as the optimal permutation schedule with a makespan equal to 29; however, the general schedule given in Table III has a make-
Table II. A Two-job Ordered Matrix Problem

                Machine
  Job      A      B      C      D
   1       2      8      5      6
   2       4      1      7      4
Table III. General Job Sequence for the Problem in Table II

                  Machine
               A      B      C      D
  Sequence    1 2    1 2    2 1    2 1
span equal to 28. From this problem it can be concluded that there may be general schedules with a lower total elapsed time than that of the optimal permutation schedule for ordered flowshop problems.
DISCUSSION
Complete enumeration of an n-job problem with no passing involves the generation of n! sequences. The proposed algorithm, on the other hand, generates only 2^(n−1) sequences. Thus for a 10-job problem, complete enumeration and the proposed algorithm will generate 3,628,800 and 512 sequences, respectively. Since no other (general flowshop type) optimizing algorithms were available for solving 10-job problems efficiently, no computational comparisons could be made. It was felt that problems with up to 15 jobs can be handled in a reasonable amount of time. As the computation time is deterministic, Table IV gives computation times for problems in each of three categories as obtained on an IBM 370/145. The algorithm was programmed in FORTRAN IV.
Table IV. Computation Times

  Problem size (n × m)    No. of sequences    Computation time (sec.)
       10 × 20                  512                   11.5
       10 × 50                  512                   27.5
       15 × 20               16,384                  535.7
The optimizing solution procedure for the flowshop ordered matrix problem is very efficient when compared with optimizing procedures for the general flowshop problem. The existing optimizing methods can efficiently solve only problems involving relatively few jobs. In fact, it has been suggested by Karp [2] that these problems will remain intractable perpetually. Karp considers a solution procedure to be satisfactory if an algorithm terminates within a number of steps bounded by a polynomial in the length of the input. For the sequencing problem the length of the input is equivalent to the number of jobs. The proposed algorithm may not be satisfactory from this point of view; however, the ordered matrix procedure is a substantial improvement over all optimizing algorithms for the general flowshop problem.
APPENDIX
Let N represent a set of n jobs and M represent a set of m machines. Let P_ij denote the processing time of job i on machine j.
Let N be partitioned into two mutually exclusive sets σ_1 and σ_2; σ_1 contains at least one job and σ_2 may be empty (we will assume without loss of generality that the job with the largest processing times is always included in σ_1). Theorem: For an n-job, m-machine ordered matrix flowshop problem with σ_1 and σ_2 as defined above, there exists an optimal makespan sequence of the form σ_1σ_2, where jobs in σ_1 are arranged in ascending order of processing times, followed by jobs in σ_2 arranged in descending order of processing times.
To prove this theorem we will make use of theorems and proofs in [4]. Figure 2 gives a schematic representation of a typical sequence S (using n = 15 and m = 6), where for convenience the job in position i is labeled as job i. Let k denote the job with the largest processing times and t be the machine with the largest processing times.
Mathematically, the two properties of the ordered flowshop problem stated earlier can be given as follows:
(i) If P_ir > P_jr for i, j ∈ N and some r ∈ M, then P_ix ≥ P_jx for all x ∈ M.
(ii) If P_ir > P_ix for some i ∈ N and r, x ∈ M, then P_yr ≥ P_yx for all y ∈ N.
With these properties we have P_kt ≥ P_ij for all i ∈ N and all j ∈ M, where k and t represent the largest job and machine (in terms of processing times), respectively.
Using contradictions and properties of the ordered matrix it is easy to show that the critical path must always pass through P_kt. As an example, in Figure 2, k = 5, t = 3, and the solid lines represent a critical path giving the makespan.
FIGURE 2. Schematic representation of makespan for sequence S.
Now consider a subproblem involving the first k jobs in sequence S and the first t machines. For this subproblem the maximum processing time for all jobs occurs on the last machine (machine t). This problem then can be solved by the algorithm described in [4], which states that the minimum makespan sequence is given by arranging jobs in ascending order of processing times.
Returning to the original N × M problem, it can be seen that arranging the jobs prior to job k in ascending order will reduce the length of the critical path (mathematically, it will not increase the length) and hence the makespan.
Using similar arguments, i.e., considering a subproblem which includes jobs k, k+1, . . ., n and machines t, t+1, . . ., m, one can show that arranging jobs in descending order will improve the makespan.
Finally, consider the two subsets of N given by σ_1 and σ_2 defined earlier. As mentioned earlier, let the largest job be included in σ_1. Let σ_1 include i jobs (i = 1, 2, . . ., n). There are C(n−1, i−1) ways in which one can choose the jobs in σ_1. Hence there are C(n−1, i−1) sequences in which jobs in σ_1 can be arranged in ascending order followed by jobs in σ_2 in descending order. In all, there are a total of
Σ_{i=1}^{n} C(n−1, i−1) = 2^(n−1)
sequences. Note that each of these sequences has a pyramid structure. Also, the proposed algorithm generates 2^(n−1) sequences having pyramid structures. From the theorem above, at least one optimal sequence has the pyramid structure. Hence the algorithm will generate at least one optimum sequence.
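The counting argument can be verified directly: summing C(n−1, i−1) over the possible sizes i of σ_1 gives 2^(n−1), the number of sequences generated by the algorithm; for n = 10 this is the 512 sequences quoted in the Discussion.

```python
from math import comb

n = 10
# Number of pyramid sequences: choose which i-1 of the remaining n-1
# jobs join the largest job in sigma_1, summed over all sizes i.
total = sum(comb(n - 1, i - 1) for i in range(1, n + 1))
print(total, 2 ** (n - 1))  # 512 512
```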
REFERENCES
[1] Johnson, S. M., "Optimal Two- and Three-Stage Production Schedules with Setup Times Included," Nav. Res. Log. Quart., 1, 61-68 (1954).
[2] Karp, R. M., "Some Remarks on the Complexity of Combinatorial Optimization Problems," presented at the 41st Meeting of ORSA, New Orleans, Louisiana, Apr. 1972.
[3] Smith, M. L., R. A. Dudek, and S. S. Panwalkar, "Job Sequencing with Ordered Processing Time Matrix," presented at the 43rd Meeting of ORSA, Milwaukee, Wisconsin, May 1973.
[4] Smith, M. L., S. S. Panwalkar, and R. A. Dudek, "Flowshop Sequencing Problem with Ordered Processing Time Matrices," Man. Sci., 21, 544-549 (1975).
MEAN SQUARE INVARIANT FORECASTERS
FOR THE WEIBULL DISTRIBUTION*
J. Tiago de Oliveira§
Faculty of Sciences
University of Lisbon
Lisbon, Portugal
and
Sebastian B. Littauert
Columbia University
New York, N.Y.
ABSTRACT
Many techniques of forecasting are based upon extrapolation from time series. While
such techniques have useful applications, they entail strong assumptions which are not explicitly enunciated. Furthermore, the time series approach is not based on an indigenous forecast principle. The first attack from the present point of view was initiated by S. S. Wilks.
Of particular interest over a wide range of operational situations in reliability, for ex-
ample, is the behavior of the extremes of the Weibull and Gumbel distributions. Here we
formulate forecasters for the minima of various forms of these distributions. The forecasters
are determined for minimization in mean square of the distance. From n original observations
the forecaster provides the minimum of the next m observations when the original distribution
is maintained.
For each of the forecasters developed, tables of efficiency have been calculated and
included in the appendix. An explicit example has been included for one of the forecasters.
Its performance has been demonstrated by the use of Monte Carlo technique. The results
indicate that the forecaster can be used in practice with satisfactory results.
INTRODUCTION
A basic scientific problem which has many ramifications is that of estimating future process out-
tes from an observed set of outcomes. This is a problem of long standing, that is, in essence, the
iamental problem of forecasting. This problem can be, and has been, approached in many ways.
it is not often recognized and expressed as a problem of forecasting. An early (the first) fundamental
tribution to forecasting future process outcomes from a given set of, say, n process outcomes was
le by S. S. Wilks [22, 23] in his pioneering papers on "Non-parametric tolerance intervals" in
ch he introduced a new statistical interval, referred to in the literature either as a tolerance or a
liction interval. It turns out that Wilks' results have great practical usefulness as well as simplicity
elegance which are hard to match.
*Dedicated to the memory of E. J. Gumbel.
§Research of this author supported in part by the Calouste Gulbenkian Foundation and NSF Grant GP 8478 while at Columbia University.
tResearch of this author supported in part by NSF Grant GP 8478.
Currently, much attention is being focused on the reliability of manufactured product as expressed in terms of product life. Very useful results in this connection have been obtained by Johns and Lieberman [9] for product life obeying the Weibull form of distribution. The authors did not, however, express their results directly in the form of forecasting about some future outcomes of the process based on, say, n previous observations.
Study of the particular problem of devising ways of giving measures of product life, which is important and challenging in itself, is perhaps more significant when approached from the point of view of forecasting. Most forecasting effort has been directed to smoothing techniques when a forecast mode has been assumed, providing merely bounds of variation for an assumed forecast relation.
The present approach, while taking off from an assumed distribution function, provides as a forecaster a function of the n+1, n+2, . . ., n+m future possible process outcomes, based on the values of a set of n previous outcomes. The criterion for choice of the estimation-forecast function is the minimization of the mean square distance between the observed values and the ones forecast. This approach is quite different from current practice, and it is hoped that the present paper has some significance, both operationally and theoretically, for the general scientific problem of forecasting as contrasted to smoothing.
1. THE PROBLEM
In a previous paper, Tiago de Oliveira [20], some results were derived about the direct computation, from observed values, of mean square forecasters and minimal average length forecasting regions. These results are here applied to obtain mean square forecasters for the Weibull and Gumbel distributions. In a subsequent paper [21], lower bounds for the mean-square error of forecasting were obtained, thus providing a technique to evaluate the efficiency of the proposed forecasters. Here only a brief sketch will be given of the general results to be applied.
First, recall the 3-parameter Weibull distribution function
(1)   W(x; λ, δ, k) = 0   if x ≤ λ,
      W(x; λ, δ, k) = 1 − exp[−((x − λ)/δ)^k]   if x ≥ λ,
where λ is the location parameter, δ (> 0) is the dispersion parameter, and k (> 0) is the shape parameter. If k is known, k = k_0, considerable simplification results. Also, assuming the location parameter λ = λ_0, one obtains by the transformation Y = −ln(X − λ_0) the Gumbel largest values distribution
(2)   Λ(y; ξ, τ) = exp[−exp(−(y − ξ)/τ)],
where ξ = −ln δ and τ = 1/k. Here ξ is the new location parameter and τ is the new dispersion parameter. We will consider only the cases where there are but two parameters: (λ, δ) (with k = k_0) in the Weibull distribution, or (ξ, τ) in the Gumbel distribution.
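The reduction to the Gumbel form can be checked by simulation: if X follows (1) with λ = λ_0 = 0, then Y = −ln X should follow (2) with ξ = −ln δ and τ = 1/k. The parameter values below are arbitrary illustrations:

```python
import math
import random

delta, k = 2.0, 1.5                  # illustrative dispersion and shape
xi, tau = -math.log(delta), 1.0 / k  # induced Gumbel parameters

def gumbel_cdf(y):
    return math.exp(-math.exp(-(y - xi) / tau))

rng = random.Random(42)
# Sample X (two-parameter Weibull, lambda_0 = 0) by inversion,
# then transform: Y = -ln(X).
ys = sorted(-math.log(delta * (-math.log(rng.random())) ** (1.0 / k))
            for _ in range(20000))
# Kolmogorov-type distance between the empirical CDF of Y and (2).
max_gap = max(abs((i + 1) / len(ys) - gumbel_cdf(y))
              for i, y in enumerate(ys))
print(max_gap < 0.03)  # True: the transformed sample looks Gumbel
```

With 20,000 samples the maximal CDF gap is typically around 0.01, consistent with the transformed sample following the Gumbel law exactly.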
2. THE GENERAL RESULTS ON QUASI-LINEAR INVARIANCE
Consider a sample of n observations (x_1, x_2, . . ., x_n) whose distribution is known about a location parameter λ and with respect to a dispersion parameter δ. Suppose further that a subsequent sequence (x_{n+1}, . . ., x_{n+m}) from the same source is obtained, and that from the first sample it is desired to make a forecast on the behavior of a quasi-linearly invariant function
(3)   Z = ψ(X_{n+1}, . . ., X_{n+m})
of the second sample sequence, i.e., a function with the following property:
(4)   ψ(λ + δX_{n+1}, . . ., λ + δX_{n+m}) = λ + δψ(X_{n+1}, . . ., X_{n+m}).
The quasi-linear invariance property is needed for the subsequent application.
Now consider the likelihood function of (X_1, . . ., X_n; Z) which, because of the quasi-linear invariance property of Z, can be expressed in the form
(5)   (1/δ^{n+1}) L((x_1 − λ)/δ, . . ., (x_n − λ)/δ; (z − λ)/δ).
Among the generally quasi-linearly invariant statistics Z are the (sample) average, the median, the largest value (maximum), the smallest value (minimum), and, more generally, linear functions of the order statistics.
The present problem is concerned with finding a quasi-linearly invariant statistic f(X_1, . . ., X_n) of the observed sample (X_1, . . ., X_n) which approaches Z optimally in the mean square sense. The function must therefore satisfy the following conditions:
(6)   f(λ + δX_1, . . ., λ + δX_n) = λ + δf(X_1, . . ., X_n),
(7)   E² = ∫ . . . ∫ [f(x_1, . . ., x_n) − z]² (1/δ^{n+1}) L((x_1 − λ)/δ, . . ., (x_n − λ)/δ; (z − λ)/δ) dx_1 . . . dx_n dz = minimum.
We now consider expression (7) in more detail. On account of the quasi-linear invariance of $\psi(X_{n+1}, \ldots, X_{n+m})$, by the linear transformation

$$X_i = \lambda + \delta X_i', \qquad Z = \lambda + \delta Z',$$

we obtain

$$E^2 = \delta^2 \int_{-\infty}^{\infty}\!\!\cdots\int_{-\infty}^{\infty} [f(x_1', \ldots, x_n') - z']^2\,\mathscr{L}(x_1', \ldots, x_n'; z')\,dx_1' \cdots dx_n'\,dz' = \delta^2 E_0^2,$$

490 J. TIAGO DE OLIVEIRA AND S. B. LITTAUER

where $E_0^2$ results from $E^2$ when $\lambda = 0$, $\delta = 1$. We will return now to the use of $(X, Z)$ even when dealing with reduced values.
Note that $E_0^2$ and $E^2$ represent the same mean square error, expressed, however, in different units. The general solution for the forecaster, as given in [20], is

(8) $$f(x_1, \ldots, x_n) = \frac{\displaystyle\int_0^{\infty} d\beta\,\beta^{n-1} \int_{-\infty}^{\infty} d\alpha\,\mathscr{L}(\alpha+\beta x_1, \ldots, \alpha+\beta x_n)\,[\mu_m(\alpha+\beta x_1, \ldots, \alpha+\beta x_n) - \alpha]}{\displaystyle\int_0^{\infty} d\beta\,\beta^{n} \int_{-\infty}^{\infty} d\alpha\,\mathscr{L}(\alpha+\beta x_1, \ldots, \alpha+\beta x_n)},$$

where

$$\mathscr{L}(x_1, \ldots, x_n) = \int dz\,\mathscr{L}(x_1, \ldots, x_n; z)$$

denotes the marginal likelihood of the sample, and

$$\mu_m(x_1, \ldots, x_n) = \int dz\,z\,\mathscr{L}(x_1, \ldots, x_n; z)\,/\,\mathscr{L}(x_1, \ldots, x_n)$$

denotes the conditional mean.
The value of $E_0^2$ can be split simply by setting $f - z = (f - \mu_m) - (z - \mu_m)$, which yields:

(10) $$E_0^2 = \int_{-\infty}^{\infty}\!\!\cdots\int_{-\infty}^{\infty} (z - \mu_m)^2\,\mathscr{L}(x_1, \ldots, x_n; z)\,dx_1 \cdots dx_n\,dz + \int_{-\infty}^{\infty}\!\!\cdots\int_{-\infty}^{\infty} [f(x_1, \ldots, x_n) - \mu_m]^2\,\mathscr{L}(x_1, \ldots, x_n)\,dx_1 \cdots dx_n = \sigma_m^2 + D_{n,m}^2(f)$$

= variance of $z$ about the conditional mean + mean square error of $f$ (about the conditional mean). The best function $f$ minimizes the second summand, since $\sigma_m^2$ is a constant. In the case of independence, the first component is, exactly, the variance of $z$, because $\mu_m(x_1, \ldots, x_n)$ is a constant, as a consequence of the splitting of $\mathscr{L}(x_1, \ldots, x_n; z)$ as $\mathscr{L}(x_1, \ldots, x_n)h(z) = g(x_1)\cdots g(x_n)h(z)$. It was shown in [21] that, in the general case, if we denote by $\varphi(\lambda, \delta)$ $(-\infty < \lambda < \infty,\ 0 < \delta < \infty)$ the integral

$$\int_{-\infty}^{\infty}\!\!\cdots\int_{-\infty}^{\infty} \mu_m(\lambda + \delta x_1, \ldots, \lambda + \delta x_n)\,\mathscr{L}(x_1, \ldots, x_n)\,dx_1 \cdots dx_n,$$

and by $w_n(\lambda, \delta)$ the integral

$$\int_{-\infty}^{\infty}\!\!\cdots\int_{-\infty}^{\infty} \frac{\mathscr{L}^2(x_1, \ldots, x_n)}{\delta^n\,\mathscr{L}(\lambda + \delta x_1, \ldots, \lambda + \delta x_n)}\,dx_1 \cdots dx_n,$$
then

$$B^2 = \sup_{(\lambda,\delta)} \frac{(\lambda - \varphi(\lambda, \delta) + \varphi(0, 1)\,\delta)^2}{w_n(\lambda, \delta) - 2\delta + \delta^2}$$

is a lower bound for the mean-square error

$$D_{n,m}^2(f) = \int_{-\infty}^{\infty}\!\!\cdots\int_{-\infty}^{\infty} [f(x_1, \ldots, x_n) - \mu_m(x_1, \ldots, x_n)]^2\,\mathscr{L}(x_1, \ldots, x_n)\,dx_1 \cdots dx_n.$$

Note that $B^2$ is indeterminate for $\lambda = 0$, $\delta = 1$. For the case of independence, since $\varphi(\lambda, \delta) = \mu_m$ is constant and $w_n(\lambda, \delta) = w^n(\lambda, \delta)$ with

$$w(\lambda, \delta) = \int_{-\infty}^{\infty} \frac{g^2(x)}{\delta\,g(\lambda + \delta x)}\,dx,$$

the lower bound has the simpler form

$$B^2 = \sup_{(\lambda,\delta)} \frac{(\lambda + \delta\mu_m - \mu_m)^2}{w^n(\lambda, \delta) - 2\delta + \delta^2}.$$

A subevaluation of the efficiency (this lower bound is not necessarily sharp and, if sharp, may not be attained) is then given as $B^2/D_{n,m}^2(f)$.
We will apply these results here although, owing to computational difficulties, in some cases we will use a lower, but computable, bound $\bar{B}^2$. In case of regularity we can use a bound $B'^2$ $(\geq \bar{B}^2)$. This bound, analogous to the Cramér–Rao bounds for estimation, is given by

$$B'^2 = \frac{s_0\,\mu_m^2 - 2s_1\,\mu_m + s_2 - 1}{n\,(s_0 s_2 - s_1^2 - s_0)},$$

where

$$s_a = \int_{-\infty}^{\infty} x^a\,\frac{g'^2(x)}{g(x)}\,dx.$$

Thus the subevaluation of the efficiency is given by $B'^2/D_{n,m}^2(f)$. The subevaluation of the efficiency will be referred to merely as the efficiency.
3. THE SHAPE PARAMETER (κ₀) KNOWN IN THE WEIBULL DISTRIBUTION

The relations developed above will now be applied directly to the Weibull distribution when the shape parameter is known. Suppose that we consider the quasi-linear function $Z$ to be

(16) $$Z = \min(X_{n+1}, \ldots, X_{n+m}).$$

Since we are dealing with independence, owing to the stability property of the Weibull distribution, we have

(17) $$\mathscr{L}(x_1, \ldots, x_n) = W'(x_1; 0, 1, \kappa_0) \cdots W'(x_n; 0, 1, \kappa_0)$$

and

(18) $$\mathscr{L}(x_1, \ldots, x_n; z) = \mathscr{L}(x_1, \ldots, x_n)\,m^{1/\kappa_0}\,W'(m^{1/\kappa_0} z; 0, 1, \kappa_0),$$

where $W'(x; 0, 1, \kappa_0)$ denotes the density of $W(x; 0, 1, \kappa_0)$, that is, $W'(x; 0, 1, \kappa_0) = 0$ if $x \leq 0$ and $W'(x; 0, 1, \kappa_0) = \kappa_0 x^{\kappa_0 - 1} e^{-x^{\kappa_0}}$ if $x \geq 0$.

Consequently, we have

(19) $$\mu_m = \int dz\,m^{1/\kappa_0}\,z\,W'(m^{1/\kappa_0} z; 0, 1, \kappa_0) = \Gamma(1 + 1/\kappa_0)/m^{1/\kappa_0} = \mu(\kappa_0)/m^{1/\kappa_0} \qquad (\mu(\kappa_0) = \mu_1).$$

As $W'(x; 0, 1, \kappa_0) = 0$ if $x \leq 0$, the integration takes place in the region $\alpha + \beta x_i \geq 0$, that is, $\alpha \geq -\beta\ell$, where $\ell = \min(x_1, \ldots, x_n)$.
The forecaster can then be written as $f(x_1, \ldots, x_n)$, where

(20) $$f(x_1, \ldots, x_n) = \frac{\displaystyle\int_0^{\infty} d\beta\,\beta^{n-1} \int_{-\beta\ell}^{\infty} d\alpha\,(\mu_m - \alpha) \prod_{i=1}^{n} W'(\alpha + \beta x_i; 0, 1, \kappa_0)}{\displaystyle\int_0^{\infty} d\beta\,\beta^{n} \int_{-\beta\ell}^{\infty} d\alpha \prod_{i=1}^{n} W'(\alpha + \beta x_i; 0, 1, \kappa_0)}$$

and, taking $\eta = \alpha + \beta\ell$,

(20') $$f(x_1, \ldots, x_n) = \ell + \frac{\displaystyle\int_0^{\infty} d\beta\,\beta^{n-1} \int_0^{\infty} d\eta\,(\mu_m - \eta) \prod_{i=1}^{n} W'(\eta + \beta(x_i - \ell); 0, 1, \kappa_0)}{\displaystyle\int_0^{\infty} d\beta\,\beta^{n} \int_0^{\infty} d\eta \prod_{i=1}^{n} W'(\eta + \beta(x_i - \ell); 0, 1, \kappa_0)}.$$

Computing facilities are of little utility here, because they would imply the computation of the forecaster $f$ for every $n$ (sample size) and for every (practically) observable sample.
In the case of the exponential distribution ($\kappa_0 = 1$), we have $W'(x; 0, 1, 1) = e^{-x}$ if $x \geq 0$ and $\mu_m = 1/m$, and we obtain

$$f(x_1, \ldots, x_n) = \ell + (\bar{x} - \ell)\left(\frac{1}{m} - \frac{1}{n}\right).$$
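A small simulation (a sketch added here, not from the paper; the seed and replication count are arbitrary) can check this closed form against observed future minima:

```python
import random

# Check the closed-form exponential forecaster  f = l + (xbar - l)*(1/m - 1/n)
# against simulated future minima, for the standard exponential (lambda = 0, delta = 1).
random.seed(2)
n, m, reps = 50, 5, 20000
forecasts, future_minima = [], []
for _ in range(reps):
    sample = [random.expovariate(1.0) for _ in range(n)]
    l, xbar = min(sample), sum(sample) / n
    forecasts.append(l + (xbar - l) * (1.0 / m - 1.0 / n))
    future_minima.append(min(random.expovariate(1.0) for _ in range(m)))

mean_f = sum(forecasts) / reps        # should sit near E[min of m future values] = 1/m
mean_z = sum(future_minima) / reps
print(round(mean_f, 3), round(mean_z, 3))
```

Both averages settle near $1/m = 0.2$; the forecaster carries only a small finite-$n$ bias of order $1/n$.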
For other values of $\kappa_0$ the computation is cumbersome; as an example consider the Rayleigh distribution, where $\kappa_0 = 2$. Let us now determine a lower bound for efficiency computation. For $B^2$ we must compute $w(\lambda, \delta)$. In many cases the supremum is attained at the indeterminacy point $(\lambda, \delta) = (0, 1)$. Consequently, to facilitate effective computability, we will take $\lambda = 0$, so that

$$w(0, \delta) = \frac{1}{\delta^{\kappa_0}(2 - \delta^{\kappa_0})} \qquad \text{for } 0 < \delta^{\kappa_0} < 2,$$

where $g(x) = W'(x; 0, 1, \kappa_0)$.

From this we compute the lower bound

$$\bar{B}^2 = \lim_{\delta \to 1} \frac{\mu_m^2\,(\delta - 1)^2}{[\delta^{\kappa_0}(2 - \delta^{\kappa_0})]^{-n} - 2\delta + \delta^2} = \frac{\mu_m^2}{1 + n\kappa_0^2}.$$

It is evident that $\bar{B}^2 \leq B^2$. The evaluation of the asymptotic efficiency is then given as

$$e_m(\kappa_0) = \lim_{n \to \infty} \bar{B}^2/D_{n,m}^2(f) = \frac{[\mu(\kappa_0)]^2}{\kappa_0^2\,m^{2/\kappa_0}\,\lim_{n \to \infty} n\,D_{n,m}^2(f)}$$

for the proposed forecaster $f$.

For $\kappa_0 = 1$, $\bar{B}^2$ is asymptotically the value of $D_{n,m}^2(f)$ for the best forecaster, already given.
Since

$$s_a = [(\kappa_0 - 1)^2 + a(\kappa_0 + a - 2)]\,\Gamma\!\left(1 + \frac{a-2}{\kappa_0}\right),$$

the bound $B'^2$, for $\kappa_0 > 2$, is

$$B'^2 = \frac{\kappa_0^2 - 2\kappa_0(\kappa_0 - 1)\Gamma(1 - 1/\kappa_0)\,\mu_m + (\kappa_0 - 1)^2\Gamma(1 - 2/\kappa_0)\,\mu_m^2}{n\,\kappa_0^2(\kappa_0 - 1)^2\,[\Gamma(1 - 2/\kappa_0) - \Gamma^2(1 - 1/\kappa_0)]}.$$

It is immediately evident that, asymptotically, $\bar{B}^2 \leq B'^2$.
4. A STUDY OF SOME FORECASTERS FOR KNOWN SHAPE PARAMETER (κ₀ ≤ 2)

It seems desirable to explore different forecasters for different ranges of $\kappa_0$ because of computational difficulties. We will therefore examine the region $\kappa_0 \leq 2$, with a hint from the result for $\kappa_0 = 1$. The special simple form of the forecaster for the exponential distribution may suggest the use of the function

(26) $$\ell + a(\kappa_0)(\bar{x} - \ell)$$

as a forecaster, where $a(\kappa_0)$ is determined so as to minimize the mean square error

$$D_{n,m}^2 = \int_{-\infty}^{\infty}\!\!\cdots\int_{-\infty}^{\infty} [\ell + a(\kappa_0)(\bar{x} - \ell) - \mu_m]^2\,\mathscr{L}(x_1, \ldots, x_n)\,dx_1 \cdots dx_n.$$

This suggestion is reinforced by the fact that the lower limit $\lambda$ has $\ell$ as a sufficient statistic (over-estimator).

The sample average $\bar{x}$ is a quasi-linear statistic converging to $\mu(\kappa_0) = \Gamma(1 + 1/\kappa_0)$, with variance

$$[\Gamma(1 + 2/\kappa_0) - \Gamma^2(1 + 1/\kappa_0)]/n = b(\kappa_0)/n.$$

$\ell$ has the expected value $\mu_n = \mu(\kappa_0)/n^{1/\kappa_0}$, its variance being

$$\frac{\Gamma(1 + 2/\kappa_0) - \Gamma^2(1 + 1/\kappa_0)}{n^{2/\kappa_0}} = b(\kappa_0)/n^{2/\kappa_0};$$

the correlation coefficient $\rho_n = \rho_n(\bar{x}, \ell)$ converges to zero (see Tiago de Oliveira [19] and Rosengaard [15]).
Thus, the mean-square error $D_{n,m}^2$, as a function of $a$, can be written as $E[(\ell + a(\bar{x} - \ell) - \mu_m)^2] = E[(\ell - \mu_m)^2] + 2E[(\ell - \mu_m)(\bar{x} - \ell)]\,a + E[(\bar{x} - \ell)^2]\,a^2$, where $E$ denotes expected value. Using the decomposition $\ell - \mu_m = \ell - \mu_n + \mu_n - \mu_m$ and $\bar{x} - \ell = \bar{x} - \mu_1 + \mu_1 - \mu_n + \mu_n - \ell$, we have

$$E[(\ell - \mu_m)^2] = \frac{b(\kappa_0)}{n^{2/\kappa_0}} + (\mu_n - \mu_m)^2,$$

$$E[(\ell - \mu_m)(\bar{x} - \ell)] = \rho_n\,\frac{b(\kappa_0)}{n^{1/\kappa_0 + 1/2}} - \frac{b(\kappa_0)}{n^{2/\kappa_0}} + (\mu_n - \mu_m)(\mu_1 - \mu_n),$$

$$E[(\bar{x} - \ell)^2] = \frac{b(\kappa_0)}{n} + (\mu_1 - \mu_n)^2 + \frac{b(\kappa_0)}{n^{2/\kappa_0}} - 2\rho_n\,\frac{b(\kappa_0)}{n^{1/\kappa_0 + 1/2}}.$$

The minimum of the quadratic is attained at some value whose limit is

(27) $$a = \mu_m/\mu_1 = m^{-1/\kappa_0},$$
whence the forecaster becomes $f = \ell + m^{-1/\kappa_0}(\bar{x} - \ell)$. The asymptotic values of the mean square error are then:

$$D_{n,m}^2 \sim \frac{\mu_m^2}{\mu_1^2}\,\frac{b(\kappa_0)}{n} = \frac{b(\kappa_0)}{n\,m^{2/\kappa_0}}, \qquad \kappa_0 < 2,$$

$$D_{n,m}^2 \sim \frac{m - 2\sqrt{m} + 2 - \pi/4}{mn}, \qquad \kappa_0 = 2,$$

$$D_{n,m}^2 \sim \frac{\Gamma^2(1 + 1/\kappa_0)\,(1 - m^{-1/\kappa_0})^2}{n^{2/\kappa_0}}, \qquad \kappa_0 > 2.$$

Consequently, the asymptotic efficiency is, since $\bar{B}^2 \sim \mu_m^2/(n\kappa_0^2)$,

$$e_m(\kappa_0) = \frac{[\mu(\kappa_0)]^2}{\kappa_0^2\,b(\kappa_0)}, \qquad \kappa_0 < 2,$$

$$e_m(2) = \frac{\pi}{16\,(m - 2\sqrt{m} + 2 - \pi/4)}, \qquad \kappa_0 = 2,$$

$$e_m(\kappa_0) = 0, \qquad \kappa_0 > 2.$$

The zero efficiency for $\kappa_0 > 2$ results from the slow convergence of $\ell$ to the parameter $\lambda$, an effect which in this region overcomes the more rapid convergence of the average $\bar{x}$. The efficiency for $\kappa_0 < 2$ is independent of $m$. A short table of the efficiency for the above forecaster is (for $m = 1$ only):

κ₀      1/3    1/2    1      2
e(κ₀)   0.5    0.8    1      0.91

The forecaster for $1/2 \leq \kappa_0 \leq 2$ is reasonably efficient. Outside this range for the shape parameter, another forecaster must be found. A full set of values of this efficiency is given in Table 1 in the Appendix.
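The efficiency expression for $\kappa_0 < 2$ is easy to evaluate; the following sketch (added here) reproduces the short table above up to rounding (the tabulated 0.5 at $\kappa_0 = 1/3$ corresponds to a computed 0.47):

```python
from math import gamma

# Asymptotic efficiency e(kappa0) = mu(kappa0)^2 / (kappa0^2 * b(kappa0))
# of the forecaster l + m**(-1/kappa0)*(xbar - l), kappa0 < 2 (independent of m).
def mu(k):                      # mu(kappa0) = Gamma(1 + 1/kappa0)
    return gamma(1.0 + 1.0 / k)

def b(k):                       # b(kappa0) = Gamma(1 + 2/kappa0) - Gamma(1 + 1/kappa0)^2
    return gamma(1.0 + 2.0 / k) - mu(k) ** 2

def eff(k):
    return mu(k) ** 2 / (k ** 2 * b(k))

for k in (1 / 3, 0.5, 1.0, 2.0):
    print(round(k, 4), round(eff(k), 2))
```

The same function, evaluated on a grid of $\kappa_0$, regenerates Table 1 of the Appendix.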
Let us now consider the use of a sample quantile $q$ and the smallest value $\ell$. It is apparent (by the argument given above) that, since the variance of a quantile is of order $1/n$, a linear combination

$$\ell + a(q - \ell)$$

will be efficient only in the region $\kappa_0 \leq 2$; it will be useful in the region $\kappa_0 < 1/2$. For $\kappa = \kappa_0$ known, let us denote by $v$ the quantile of probability $\omega = W(v; 0, 1, \kappa_0)$. It is well known (Cramér [2]) that the sample quantile $q$ for probability $\omega$ is asymptotically normal with expected value $v$ and asymptotic variance of the order of $\omega(1 - \omega)/n\,W'^2(v; 0, 1, \kappa_0)$.
Since in the present case we have

$$W(x; 0, 1, \kappa_0) = 1 - e^{-x^{\kappa_0}}, \qquad x \geq 0,$$

and

$$W'(x; 0, 1, \kappa_0) = \kappa_0 x^{\kappa_0 - 1} e^{-x^{\kappa_0}}, \qquad x \geq 0,$$

the asymptotic variance is of the order of

$$\frac{e^{v^{\kappa_0}} - 1}{n\,\kappa_0^2\,v^{2\kappa_0 - 2}}.$$

The mean square error is

$$E[(\ell + a(q - \ell) - \mu_m)^2] = E[(\ell - \mu_m)^2] + 2E[(\ell - \mu_m)(q - \ell)]\,a + E[(q - \ell)^2]\,a^2,$$

with

$$E[(\ell - \mu_m)^2] = \frac{b(\kappa_0)}{n^{2/\kappa_0}} + (\mu_n - \mu_m)^2,$$

$$E[(\ell - \mu_m)(q - \ell)] = \rho_n\,\frac{\tau\,b^{1/2}(\kappa_0)}{n^{1/2}\,n^{1/\kappa_0}} - \frac{b(\kappa_0)}{n^{2/\kappa_0}} + (\mu_n - \mu_m)(v_n - \mu_n),$$

$$E[(q - \ell)^2] = \frac{\tau^2}{n} + (v_n - \mu_n)^2 + \frac{b(\kappa_0)}{n^{2/\kappa_0}} - 2\rho_n\,\frac{\tau\,b^{1/2}(\kappa_0)}{n^{1/2}\,n^{1/\kappa_0}},$$

where we have made a decomposition analogous to the previous one and where $v_n$, $\tau^2/n$, and $\rho_n$ denote the exact expected value and the variance of the sample quantile and the correlation coefficient between the sample quantile and the minimum. As might be expected, $\rho_n \to 0$ (Rosengaard [15]).*

Denoting by $v$ the limiting value of $v_n$, we see that the limiting value of $a$ that minimizes the mean square error is

$$a = \frac{\mu_m}{v},$$

*In Cramér [2] it is shown that $(v_n - v) = o(n^{-1/2})$.
and the asymptotic mean-square error for $\kappa_0 < 2$ is

$$D_{n,m}^2 \sim \frac{\mu_m^2\,(e^{v^{\kappa_0}} - 1)}{n\,\kappa_0^2\,v^{2\kappa_0}}.$$

For $\kappa_0 = 2$ we have

$$D_{n,m}^2 \sim \frac{1}{n}\left[\frac{\pi\,(e^{v^2} - 1)}{16\,m\,v^4} + \left(1 - \frac{\sqrt{\pi}}{2\sqrt{m}\,v}\right)^{\!2}\,\right].$$

We will only consider the problem for $\kappa_0 < 2$; the case $\kappa_0 = 2$ could be dealt with numerically. The minimum mean-square error for variation of $v$ is attained for $v^{\kappa_0} = 1.593624$, and we have approximately

$$D_{n,m}^2 \sim \frac{\mu_m^2}{n\kappa_0^2}\cdot\frac{e^{1.593624} - 1}{(1.593624)^2} = \frac{1.5441386\,\mu_m^2}{n\kappa_0^2},$$

so that the asymptotic efficiency is

$$e_m(\kappa_0) = \frac{1}{1.5441386} \approx 0.65.$$

Taking the approximation $v^{\kappa_0} = 1.60$, the constant efficiency is $(1.60)^2/(e^{1.60} - 1) = 0.65$ also. The forecaster will be

$$\ell + \mu(\kappa_0)\,(1.60\,m)^{-1/\kappa_0}\,(q - \ell),$$

where $q$ is then the sample quantile for probability $\omega = 1 - e^{-v^{\kappa_0}} = 1 - e^{-1.60} = 0.798$. This technique, whose asymptotic efficiency is independent of $m$ and $\kappa_0$, is to be used for values of $\kappa_0 \leq 0.40$, where the efficiency of the forecaster $\ell + a(\bar{x} - \ell)$ is smaller than 0.65. For $\kappa_0 = 2$ a better forecaster will be considered.
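The constant 1.593624 can be recovered numerically: it is the maximizer of $u^2/(e^u - 1)$ with $u = v^{\kappa_0}$. A sketch added here, using golden-section search (any one-dimensional maximizer would do):

```python
from math import exp

# Locate u = v**kappa0 maximizing the asymptotic efficiency u^2/(exp(u) - 1)
# of the quantile-plus-minimum forecaster; the text gives u = 1.593624.
def eff(u):
    return u * u / (exp(u) - 1.0)

lo, hi = 0.5, 3.0                 # golden-section search on [0.5, 3]
phi = (5 ** 0.5 - 1) / 2
a, b = hi - phi * (hi - lo), lo + phi * (hi - lo)
for _ in range(80):
    if eff(a) < eff(b):           # maximum lies in [a, hi]
        lo, a = a, b
        b = lo + phi * (hi - lo)
    else:                         # maximum lies in [lo, b]
        hi, b = b, a
        a = hi - phi * (hi - lo)
u_star = (lo + hi) / 2
print(round(u_star, 5), round(1.0 / eff(u_star), 7))
```

The second printed value is the constant 1.5441386 appearing in the mean-square error above.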
5. THE MAXIMUM LIKELIHOOD FORECASTER FOR THE SHAPE PARAMETER (κ₀ > 2) KNOWN

The maximum likelihood estimators of $\lambda$ and $\delta$ for $\kappa_0 > 2$ known are given by the equations

$$\frac{\sum (x_i - \hat{\lambda})^{\kappa_0 - 1}}{\sum (x_i - \hat{\lambda})^{\kappa_0}} = \frac{\kappa_0 - 1}{n\kappa_0} \sum \frac{1}{x_i - \hat{\lambda}}, \qquad \hat{\delta}^{\kappa_0} = \frac{1}{n} \sum (x_i - \hat{\lambda})^{\kappa_0},$$
so that the forecaster of $\lambda + \mu_m\delta$ to be used must be $\hat{\lambda} + \mu_m\hat{\delta}$. Since, for $\lambda = 0$, $\delta = 1$, the variances and covariances are asymptotically

$$V(\hat{\lambda}) \sim \frac{\kappa_0^2}{n\Delta}, \qquad V(\hat{\delta}) \sim \frac{(\kappa_0 - 1)^2\,\Gamma(1 - 2/\kappa_0)}{n\Delta}, \qquad C(\hat{\lambda}, \hat{\delta}) \sim \frac{-\kappa_0(\kappa_0 - 1)\,\Gamma(1 - 1/\kappa_0)}{n\Delta},$$

$$\Delta = \kappa_0^2(\kappa_0 - 1)^2\,[\Gamma(1 - 2/\kappa_0) - \Gamma^2(1 - 1/\kappa_0)],$$

the mean square error of the forecaster $\hat{\lambda} + \mu_m\hat{\delta}$ is

$$D^2_{\hat{\lambda} + \mu_m\hat{\delta}} \sim \frac{\kappa_0^2 - 2\kappa_0(\kappa_0 - 1)\Gamma(1 - 1/\kappa_0)\,\mu_m + (\kappa_0 - 1)^2\Gamma(1 - 2/\kappa_0)\,\mu_m^2}{n\,\kappa_0^2(\kappa_0 - 1)^2\,[\Gamma(1 - 2/\kappa_0) - \Gamma^2(1 - 1/\kappa_0)]}.$$

This is precisely equal to the asymptotic value of $B'^2$, whence the efficiency is 1. This technique requires the use of computers for determining $\hat{\lambda}$ and $\hat{\delta}$. Since, however, it is desirable to have simpler and more practical techniques where computer facilities are not available, we consider in the next section some all-range forecasters in this category. Such forecasters can be quite useful since the range $\kappa_0 \leq 2$ is most important for practical application.
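The two likelihood equations can be solved numerically; the sketch below (an illustration added here, using simple bisection on $\hat{\lambda}$ over $(-\infty, \min x_i)$ and then the closed form for $\hat{\delta}$) produces the forecaster $\hat{\lambda} + \mu_m\hat{\delta}$ on simulated data with $\lambda = 2$, $\delta = 1$, $\kappa_0 = 3$:

```python
import random
from math import gamma

# Standard ML for the three-parameter Weibull with known shape kappa0 > 2,
# following the two likelihood equations above; lam_hat found by bisection.
def ml_forecast(x, kappa, m):
    n = len(x)
    def h(lam):                       # first likelihood equation as a function of lam
        d = [xi - lam for xi in x]
        return (sum(di ** (kappa - 1) for di in d) / sum(di ** kappa for di in d)
                - (kappa - 1) / (n * kappa) * sum(1.0 / di for di in d))
    lo, hi = min(x) - 10.0, min(x) - 1e-9
    for _ in range(200):              # bisection, assuming h(lo) > 0 > h(hi)
        mid = 0.5 * (lo + hi)
        if h(mid) > 0:
            lo = mid
        else:
            hi = mid
    lam_hat = 0.5 * (lo + hi)
    delta_hat = (sum((xi - lam_hat) ** kappa for xi in x) / n) ** (1.0 / kappa)
    mu_m = gamma(1.0 + 1.0 / kappa) / m ** (1.0 / kappa)
    return lam_hat + mu_m * delta_hat # the forecaster lam_hat + mu_m * delta_hat

random.seed(3)
kappa, m = 3.0, 5
sample = [2.0 + random.weibullvariate(1.0, kappa) for _ in range(200)]  # lam = 2, delta = 1
print(round(ml_forecast(sample, kappa, m), 3))
```

The true quantity being forecast has mean $\lambda + \mu_m\delta = 2 + \Gamma(4/3)/5^{1/3} \approx 2.52$, and the printed estimate should land close to it.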
6. SOME ALL-RANGE FORECASTERS

The difficulties found above with respect to the computability and efficiency of some proposed forecasters may suggest consideration of the traditional approach, using quantiles and moments, since their variances are of order $n^{-1}$, the order of the lower bounds.

For example, consider the quantile $q$ for the probability $\omega = W(\mu_m; 0, 1, \kappa_0)$, whose asymptotic efficiency for the bound $\bar{B}^2$ is given as

(33) $$e_m(\kappa_0) = \frac{[\mu(\kappa_0)]^{2\kappa_0}}{m^2\,[e^{\mu^{\kappa_0}(\kappa_0)/m} - 1]},$$

which is approximated by $[\mu(\kappa_0)]^{\kappa_0}/m$ for large $m$. For $m = 1$ (one-step forecast) we have the following table of efficiencies:

κ₀      0.25   0.5    1      2
e(κ₀)   0.60   0.64   0.58   0.52

This short table shows that the quantile forecaster for $W(\mu_m; 0, 1, \kappa_0)$, whose efficiency fades out with $m$, is not very useful in the range $\kappa_0 \leq 2$. Values of this efficiency are given in Table 2 of the Appendix.
For $\kappa_0 > 2$ the efficiency is given, using the bound $B'^2$, by

(34) $$e_m(\kappa_0) = \frac{\mu_m^{2(\kappa_0 - 1)}\,[\kappa_0^2 - 2\kappa_0(\kappa_0 - 1)\Gamma(1 - 1/\kappa_0)\,\mu_m + (\kappa_0 - 1)^2\Gamma(1 - 2/\kappa_0)\,\mu_m^2]}{(e^{\mu_m^{\kappa_0}} - 1)\,(\kappa_0 - 1)^2\,[\Gamma(1 - 2/\kappa_0) - \Gamma^2(1 - 1/\kappa_0)]},$$

where $\mu_m = \mu(\kappa_0)/m^{1/\kappa_0} = \Gamma(1 + 1/\kappa_0)/m^{1/\kappa_0}$. The short table, for $m = 1$, is:

κ₀       2.5   3.0   3.5   4.0   4.5   5.0
e₁(κ₀)   .53   .55   .57   .58   .59   .60

which shows that the quantile forecaster has an efficiency of about 0.6. A full table of the efficiency for κ₀ > 2 is given in Table 3.
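Expression (33) can be checked directly; the sketch below (added here) reproduces the $m = 1$ efficiencies above and the entries of Table 2:

```python
from math import exp, gamma

# Efficiency (33) of the quantile forecaster for kappa0 <= 2:
#   e_m(kappa0) = mu(kappa0)^(2*kappa0) / (m^2 * (exp(mu(kappa0)^kappa0 / m) - 1)).
def eff_quantile(k, m):
    mu_k = gamma(1.0 + 1.0 / k) ** k          # mu(kappa0)^kappa0
    return mu_k ** 2 / (m ** 2 * (exp(mu_k / m) - 1.0))

for k in (0.25, 0.5, 1.0, 2.0):
    print(k, round(eff_quantile(k, 1), 2))
```

At $\kappa_0 = 1$, $m = 1$, this is $1/(e - 1) = 0.58198$, the first-column entry of Table 2.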
Consider now the use of a forecaster based on the very classical method of moments,

(35) $$\bar{x} + a\,s,$$

where $s$ denotes the standard deviation of the original sample. The mean square error is $D^2 = E[(\bar{x} + as - \mu_m)^2] = E[(\bar{x} - \mu_m)^2] + 2E[(\bar{x} - \mu_m)s]\,a + E(s^2)\,a^2$. The covariance between $\bar{x}$ and $s$, $C_n$, is approximated by $\beta_1(\kappa_0)\,b(\kappa_0)/2n$ (Gumbel and Carlson [5]), where $\beta_1(\kappa_0)$ denotes the skewness coefficient. With

$$\sigma_n(\kappa_0) \sim \sqrt{b(\kappa_0)}\left(1 - \frac{\beta_2(\kappa_0) + 3}{8n}\right)$$

as the expected value of $s$, where $\beta_2(\kappa_0)$ denotes the kurtosis coefficient (the fourth central moment over the square of the variance), we have

$$D_{n,m}^2 = \frac{b(\kappa_0)}{n} + (\mu_1 - \mu_m)^2 + 2[C_n + (\mu_1 - \mu_m)\,\sigma_n(\kappa_0)]\,a + b(\kappa_0)\,a^2.$$

The value at which the quadratic attains the minimum converges to

$$a = -\frac{\mu_1 - \mu_m}{\sqrt{b(\kappa_0)}},$$

so that the asymptotic value of the mean-square error is

(36) $$D_{n,m}^2 \sim \frac{b(\kappa_0)}{n}\left[1 - \frac{\beta_1(\kappa_0)(\mu_1 - \mu_m)}{\sqrt{b(\kappa_0)}} + \frac{[\beta_2(\kappa_0) + 3](\mu_1 - \mu_m)^2}{4\,b(\kappa_0)}\right],$$

where (Gumbel [4]):

$$\beta_1(\kappa_0) = [\Gamma(1 + 3/\kappa_0) - 3\Gamma(1 + 2/\kappa_0)\Gamma(1 + 1/\kappa_0) + 2\Gamma^3(1 + 1/\kappa_0)]/b^{3/2}(\kappa_0),$$

$$\beta_2(\kappa_0) = [\Gamma(1 + 4/\kappa_0) - 4\Gamma(1 + 3/\kappa_0)\Gamma(1 + 1/\kappa_0) + 6\Gamma(1 + 2/\kappa_0)\Gamma^2(1 + 1/\kappa_0) - 3\Gamma^4(1 + 1/\kappa_0)]/b^2(\kappa_0).$$
The forecaster is then

(37) $$\bar{x} - \frac{\mu_1 - \mu_m}{\sqrt{b(\kappa_0)}}\,s.$$

Before continuing, a short proof of the asymptotic value of $\sigma_n$ will be given. In Cramér [2] we have the following relation,

$$E(s - \sigma_n)^2 = \frac{(\beta_2(\kappa_0) - 1)\,b(\kappa_0)}{4n} + O(n^{-2}).$$

Developing the square and taking expected values, we have

$$\frac{n-1}{n}\,b(\kappa_0) - \sigma_n^2 = \frac{(\beta_2(\kappa_0) - 1)\,b(\kappa_0)}{4n} + O(n^{-2}),$$

from which the desired result follows.
The asymptotic efficiency with respect to $\bar{B}^2$ is

(38) $$e_m(\kappa_0) = \frac{\mu_m^2}{\kappa_0^2\,b(\kappa_0)\left[1 - \dfrac{\beta_1(\kappa_0)(\mu_1 - \mu_m)}{\sqrt{b(\kappa_0)}} + \dfrac{(\beta_2(\kappa_0) + 3)(\mu_1 - \mu_m)^2}{4\,b(\kappa_0)}\right]}.$$

The asymptotic efficiency for $\kappa_0 > 2$, with respect to $B'^2$, is given by

(39) $$e_m(\kappa_0) = \frac{\kappa_0^2 - 2\kappa_0(\kappa_0 - 1)\Gamma(1 - 1/\kappa_0)\,\mu_m + (\kappa_0 - 1)^2\Gamma(1 - 2/\kappa_0)\,\mu_m^2}{\kappa_0^2(\kappa_0 - 1)^2\,[\Gamma(1 - 2/\kappa_0) - \Gamma^2(1 - 1/\kappa_0)]\,b(\kappa_0)\left[1 - \dfrac{\beta_1(\kappa_0)(\mu_1 - \mu_m)}{\sqrt{b(\kappa_0)}} + \dfrac{(\beta_2(\kappa_0) + 3)(\mu_1 - \mu_m)^2}{4\,b(\kappa_0)}\right]}.$$

A short table for the first efficiency for $m = 1$ ($\mu_1 = \mu_m$) is:

κ₀       0.5    1.0    2.0
e₁(κ₀)   0.80   1.00   0.91

A table of the first efficiency for $\kappa_0 \leq 2$ is given in Table 4 of the Appendix; the second efficiency is given in Table 5 for $\kappa_0 > 2$. The second evaluation is closer to the true efficiency since $\bar{B}^2 \leq B'^2$. It may readily be shown that for the second efficiency at the limit $\kappa_0 \downarrow 2$, the value obtained is equal to that for the first efficiency at $\kappa_0 = 2$.
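For $m = 1$ the bracket in (38) equals one, so the efficiency reduces to that of Table 1; a sketch (added here) evaluating (38):

```python
from math import gamma

# Efficiency (38) of the moments forecaster xbar + a*s for kappa0 <= 2.
def g(k, r):
    return gamma(1.0 + r / k)

def eff_moments(k, m):
    mu1 = g(k, 1)
    mu_m = mu1 / m ** (1.0 / k)
    b = g(k, 2) - mu1 ** 2
    beta1 = (g(k, 3) - 3 * g(k, 2) * mu1 + 2 * mu1 ** 3) / b ** 1.5
    beta2 = (g(k, 4) - 4 * g(k, 3) * mu1 + 6 * g(k, 2) * mu1 ** 2 - 3 * mu1 ** 4) / b ** 2
    d = mu1 - mu_m                         # vanishes when m = 1
    bracket = 1.0 - beta1 * d / b ** 0.5 + (beta2 + 3.0) * d * d / (4.0 * b)
    return mu_m ** 2 / (k ** 2 * b * bracket)

for k in (0.5, 1.0, 2.0):
    print(k, round(eff_moments(k, 1), 2))
```

Evaluating the same function for $m > 1$ regenerates Table 4 of the Appendix.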
7. THE CASE FOR THE GUMBEL DISTRIBUTION
Let us consider now the Gumbel distribution; for the sake of convenience, we will consider the maxima distribution $\exp(-e^{-Y})$. The minima are dealt with by symmetry. Computation of the forecaster $f(Y_1, \ldots, Y_n)$ by the formula given in (8) gives

$$f(Y_1, \ldots, Y_n) = \frac{\displaystyle\int_0^{\infty} d\beta\,\beta^{n-1}\,\frac{\prod e^{-\beta Y_i}}{\left(\sum e^{-\beta Y_i}\right)^{n}}\left[\gamma + \ln m + \frac{\Gamma'(n)}{\Gamma(n)} - \ln \sum e^{-\beta Y_i}\right]}{\displaystyle\int_0^{\infty} d\beta\,\beta^{n}\,\frac{\prod e^{-\beta Y_i}}{\left(\sum e^{-\beta Y_i}\right)^{n}}}.$$
The formula is as uncomputable as the one given before. Since we will investigate other forecasters, we will need the lower bound. This one can be computed by the formula for $B'^2$; its value is

$$B'^2 \sim \frac{1 + \dfrac{6}{\pi^2}(1 + \ln m)^2}{n}.$$

For $m = 1$ (one-step forecasting) its value is $1.61/n$, as given in [21]; the value of $\mu_m$ is $\gamma + \ln m = 0.577\ldots + \ln m$. As it is not natural to use here the smallest observed value, we will only consider as forecasters the quantile for the point $\mu_m$ and the linear combination $\bar{y} + as$.

The various computations follow as before (using the same formulae), so that we will only give the results.

Quantile forecaster: its asymptotic mean square error is $D_{n,m}^2 \sim e^{2\mu_m}(e^{e^{-\mu_m}} - 1)/n$, so that the efficiency is

$$e_m = \frac{1 + \dfrac{6}{\pi^2}(1 + \ln m)^2}{e^{2\gamma}\,(e^{e^{-\gamma}/m} - 1)\,m^2},$$

the efficiency for $m = 1$ being $e_1 = 0.67$. Table 6 in the Appendix gives the values of this efficiency.
Moments forecaster: its asymptotic mean square error, where $\beta_1 = 1.14$, $\beta_2 = 5.4$, is

$$D_{n,m}^2 \sim \frac{\pi^2}{6n}\left[1 + \frac{\beta_1 \ln m}{\sqrt{\pi^2/6}} + \frac{(\beta_2 + 3)(\ln m)^2}{4\,\pi^2/6}\right],$$

so that the efficiency is

$$e_m = \frac{1 + \dfrac{6}{\pi^2}(1 + \ln m)^2}{\dfrac{\pi^2}{6}\left[1 + \dfrac{\beta_1 \ln m}{\sqrt{\pi^2/6}} + \dfrac{(\beta_2 + 3)(\ln m)^2}{4\,\pi^2/6}\right]}.$$

The efficiency for $m = 1$ is $e_1 = 0.98$, the forecaster being

$$\bar{y} + \frac{\sqrt{6}\,\ln m}{\pi}\,s.$$
Table 7 of the Appendix gives the values of these efficiencies. Evidently the maximum likelihood forecaster (of the form $\hat{\lambda} + \mu_m\hat{\delta}$, see Gumbel [4]) has efficiency one. In case the maximum likelihood forecaster is not used, the moments forecaster is preferable to the quantile forecaster for $m > 1$.
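Both Gumbel-case efficiencies are simple to evaluate; the sketch below (added here, with Euler's constant entered numerically) reproduces $e_1 = 0.67$ and $e_1 = 0.98$ and compares the two forecasters as $m$ grows:

```python
from math import exp, log, pi

# The two Gumbel-case efficiencies quoted above; gamma_e is Euler's constant.
gamma_e = 0.5772156649
b = pi ** 2 / 6.0                          # variance of the standard Gumbel maxima law
beta1, beta2 = 1.14, 5.4                   # skewness and kurtosis, as in the text

def bound(m):                              # numerator n * B'^2 = 1 + (6/pi^2)(1 + ln m)^2
    return 1.0 + (1.0 + log(m)) ** 2 / b

def eff_quantile(m):
    return bound(m) / (exp(2 * gamma_e) * (exp(exp(-gamma_e) / m) - 1.0) * m ** 2)

def eff_moments(m):
    bracket = 1.0 + beta1 * log(m) / b ** 0.5 + (beta2 + 3.0) * log(m) ** 2 / (4.0 * b)
    return bound(m) / (b * bracket)

for m in (1, 2, 5):
    print(m, round(eff_quantile(m), 2), round(eff_moments(m), 2))
```

As $m \to \infty$ the quantile efficiency decays like $(\ln m)^2/m$ while the moments efficiency tends to the constant $4/[b(\beta_2 + 3)] \approx 0.29$, which is what makes the moments forecaster the better of the two for multi-step forecasting.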
8. THE LOCATION PARAMETER (λ₀) KNOWN IN THE WEIBULL DISTRIBUTION

In that case let us suppose now that $\lambda$ has a known value $\lambda_0$; for simplicity take $\lambda_0 = 0$. Take the transformation $Y_i = -\ln X_i$, $W = -\ln Z$. Their distribution is obviously, as said before, the Gumbel largest values distribution with parameters $\xi = -\ln\delta$, $\tau = 1/\kappa$. We may remark that if $Z$ is any order statistic of $(X_{n+1}, \ldots, X_{n+m})$ then $W = -\ln Z$ is the complementary order statistic of $(-\ln X_{n+1}, \ldots, -\ln X_{n+m})$, owing to the monotonic decreasing character of the transformation.

Suppose now that for $(Y_1, \ldots, Y_n; W)$ we examine the forecaster $f(Y_1, \ldots, Y_n)$ verifying conditions (6) and (7); that is,

$$f(\xi + \tau Y_1, \ldots, \xi + \tau Y_n) = \xi + \tau f(Y_1, \ldots, Y_n)$$

and

$$\int \cdots \int [w - f(y_1, \ldots, y_n)]^2\,\mathscr{L}(y_1, \ldots, y_n; w)\,dy_1 \cdots dy_n\,dw,$$

a minimum, $\mathscr{L}(y_1, \ldots, y_n; w)$ denoting the likelihood function of $(y_1, \ldots, y_n; w)$ as transformed from the likelihood $\mathscr{L}(x_1, \ldots, x_n; z)$. Now putting $f(-\ln X_1, \ldots, -\ln X_n) = -\ln g^*(X_1, \ldots, X_n)$, those relations are transformed to

$$g^*(\delta X_1^{1/\kappa}, \ldots, \delta X_n^{1/\kappa}) = \delta\,g^{*\,1/\kappa}(X_1, \ldots, X_n)$$

and

$$\int \cdots \int \left(\ln \frac{g^*(X_1, \ldots, X_n)}{z}\right)^{\!2} \mathscr{L}(X_1, \ldots, X_n; z)\,dX_1 \cdots dX_n\,dz,$$

a minimum.

That shows that the technique of forecasting in the Gumbel distribution as applied to $Y_i$ and $W$ does not give, effectively, the least squares forecaster of $Z$ but, say, the logarithmic least squares forecaster of $Z$.
9. EXAMPLE
Consider the forecaster $\ell' = \ell + m^{-1/\kappa_0}(\bar{x} - \ell)$, which derives from equations (26) and (27), where in $W(X; \lambda, \delta, \kappa)$, $\lambda$ is set equal to zero, $\delta = 1$, and $\kappa = \kappa_0$, $0 < \kappa_0 \leq 2$, whence $W(X; 0, 1, \kappa_0) = 1 - e^{-X^{\kappa_0}}$.

In the forecaster equation, $\ell'$ is the forecast minimum from $n$ observations on the random variable $X$; $\bar{x}$ is the linear average of these $n$ random values of $X$; $\ell$ is the minimum of the sample from $W(X; 0, 1, \kappa_0)$ resulting from the first $n$ values of $X$; $m$ subsequent values of $W(X; 0, 1, \kappa_0)$ are obtained on a new set of $m$ random values of $X$; $\bar{x}_m$ is the linear average of these $m$ values. The forecast minimum $\ell'$ is compared with the $m$ observed values of $W(X; 0, 1, \kappa_0)$.
The results are given below, for:

κ₀ = 1.2:  n = 50, m = 20, 40;  n = 100, m = 50;  n = 500, m = 50, 100, 200, 300, 400
κ₀ = 1.5:  n = 50, m = 20, 40;  n = 100, m = 50;  n = 500, m = 50, 100, 200, 300, 400
κ₀ = 2.0:  n = 50, m = 20, 40;  n = 100, m = 50;  n = 500, m = 100, 200, 300, 400

Two conclusions are apparent from these results. For each value of $\kappa_0$, the number among the future observations which are smaller than the forecast minimum $\ell'$ decreases as $n$ increases, as would be expected. And the forecaster is more efficient for smaller values of $\kappa_0$ when $n$ is large; the efficiency is much poorer for $\kappa_0 = 2.0$. It seems that an effective range for the combination $(\kappa_0, n)$ is found empirically.

The results contained below were obtained by Monte Carlo simulation using an HP 9810 programmable calculator. The values $\tilde{\ell}_1, \tilde{\ell}_2, \ldots$ denote the observed smallest values among the new sample of $m$ which are smaller than $\ell'$. We wish to express our appreciation to Professor Tuncel M. Yegulalp for his assistance in providing the computations for determining the following data:
  n    m    x̄_n       x̄_m       ℓ         ℓ'        ℓ̃₁, ℓ̃₂, … (future values below ℓ')

κ₀ = 1.2:
  50   20   1.106482  0.618368  0.087202  0.171168  0.013614, 0.067171, 0.090015, 0.170916
  50   40   0.999691  0.864131  0.037770  0.082242  0.012288, 0.026883, 0.027323, 0.057124
 100   50   0.993607  0.870766  0.058743  0.094630  0.009625, 0.075735, 0.089076
 500   50   0.955362  1.053420  0.017990  0.053974  (all ≥ ℓ')
 500  100   0.986448  0.811142  0.005847  0.026973  0.009693, 0.021219
 500  200   0.948326  0.995920  0.009625  0.020975  (all ≥ ℓ')
 500  300   0.995920  0.995920  0.009625  0.016349  (all ≥ ℓ')
 500  400   0.938055  0.927183  0.000687  0.007048  0.004685

κ₀ = 1.5:
  50   20   0.802585  1.157086  0.041248  0.097344  (all ≥ ℓ')
  50   40   0.864939  0.919777  0.091047  0.157814  0.020942
 100   50   0.952770  0.885156  0.080250  0.144538  0.041757
 500   50   0.936661  0.987915  0.032320  0.098952  0.083899
 500  100   0.902707  0.901279  0.024205  0.065282  0.044599, 0.059561
 500  200   0.931957  0.893814  0.009254  0.036226  0.010214
 500  300   0.906875  0.894213  0.003972  0.024120  (all ≥ ℓ')
 500  400   0.948098  0.958196  0.004700  0.022078  0.001939, 0.018320

κ₀ = 2.0:
  50   20   0.936665  0.870119  0.254734  0.407218  0.138128, 0.288591, 0.386233
  50   40   0.818688  0.914252  0.089966  0.205187  0.018976, 0.090137, 0.102450
 100   50   0.822177  0.981068  0.100348  0.202430  (all ≥ ℓ')
 500  100   0.895189  0.837045  0.056219  0.140116  0.021859, 0.023695, 0.056533, 0.063959, 0.099370, 0.118113
 500  200   0.918345  0.862408  0.076011  0.135573  0.010172, 0.045231, 0.054819, 0.064132, 0.093357, 0.097550
 500  300   0.901596  0.894982  0.066431  0.114653  0.065947, 0.076226, 0.085539, 0.095738, 0.099727
 500  400   0.897788  0.884023  0.047098  0.089632  0.031186, 0.046646, 0.055452, 0.076625, 0.086074
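The experiment is easy to reproduce on a modern machine; a sketch added here (the seed and replication count are arbitrary choices) of the procedure described above:

```python
import random

# One replication of the Monte Carlo experiment: draw n Weibull(0, 1, kappa0) values,
# form the forecast minimum l' = l + m**(-1/kappa0) * (xbar - l), then count how many
# of m fresh values fall below l'.
def trial(kappa0, n, m, rng):
    sample = [rng.weibullvariate(1.0, kappa0) for _ in range(n)]
    l, xbar = min(sample), sum(sample) / n
    l_prime = l + m ** (-1.0 / kappa0) * (xbar - l)
    future = [rng.weibullvariate(1.0, kappa0) for _ in range(m)]
    return sum(1 for x in future if x < l_prime)

rng = random.Random(4)
for kappa0 in (1.2, 1.5, 2.0):
    for n, m in ((50, 20), (500, 100)):
        below = sum(trial(kappa0, n, m, rng) for _ in range(200)) / 200.0
        print(kappa0, n, m, round(below, 2))
```

Averaged over replications, only a handful of the $m$ future values fall below $\ell'$, with the count shrinking as $n$ grows, in line with the conclusions above.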
REFERENCES
[1] Chew, Victor, "Simultaneous Prediction Intervals," Technometrics 10, 323-31 (1968).
[2] Cramér, H., Mathematical Methods of Statistics (Princeton University Press, 1946).
[3] David, H. A., Order Statistics (John Wiley & Sons, Inc., New York, 1970).
[4] Gumbel, E. J., Statistics of Extremes (Columbia University Press, 1958).
[5] Gumbel, E. J. and P. G. Carlson, "On the Asymptotic Covariance of the Sample Mean and Standard Deviation," Metron XVIII (1956).
[6] Hahn, G. J., "Factors for Calculating Two-Sided Prediction Intervals for Samples From a Normal Distribution," Journal of the American Statistical Association 64, 878-88 (1969).
[7] Hahn, G. J., "Additional Factors for Calculating Prediction Intervals for Samples From a Normal Population," Journal of the American Statistical Association 65, 1668-76 (1970).
[8] Harter, H. L., Order Statistics and Their Use in Testing and Estimation, Vol. 2 (U.S. Government Printing Office, Washington, D.C., 1969).
[9] Johns, M. V. and G. J. Lieberman, "An Exact Asymptotically Efficient Confidence Bound for Reliability in the Case of the Weibull Distribution," Technometrics 8 (1966).
[10] Lieberman, G. J., "Prediction Regions for Several Predictions From a Single Regression Line," Technometrics 3, 409-17 (1961).
[11] Mann, Nancy R., "Warranty Periods Based on Three Ordered Sample Observations from a Weibull Population," IEEE Transactions on Reliability R-19, 167-71 (1970).
[12] Mann, Nancy R. and Frank E. Grubbs, "Chi-Square Approximations for Exponential Parameters, Prediction Intervals and Beta Percentiles," Journal of the American Statistical Association 69, 654-661 (1974).
[13] Mann, Nancy R. and Sam C. Saunders, "On Evaluation of Warranty Assurance When Life Has a Weibull Distribution," Biometrika 56, 615-25 (1969).
[14] Mann, Nancy R., Ray E. Schafer, and Nozer D. Singpurwalla, Methods for Statistical Analysis of Reliability and Life Data (John Wiley & Sons, Inc., New York, 1974).
[15] Rosengaard, A., Contributions à l'Étude des Liaisons Limites entre Différentes Statistiques d'un Échantillon, Thèse, Institut de Statistique, Univ. Paris (1966).
[16] Sarhan, Ahmed E. and Bernard G. Greenberg, Contributions to Order Statistics (John Wiley & Sons, Inc., New York, 1962).
[17] Saunders, Sam C., "On the Determination of a Safe Life for Distributions Classified by Failure Rate," Technometrics 10, 361-77 (1968).
[18] Teichroew, D., "Tables of Expected Values of Order Statistics from Samples of Size 20 and Less From the Normal Distribution," Annals of Mathematical Statistics 27, 410-26 (1956).
[19] Tiago de Oliveira, J., "The Asymptotic Independence of the Mean and the Extreme Values," Univ. Lisboa, Rev. Fac. Ciências 2(A), 8 (1962).
[20] Tiago de Oliveira, J., "Quasi-Linearly Invariant Prediction," Annals of Mathematical Statistics 37 (1966).
[21] Tiago de Oliveira, J., "Efficiency Evaluations for Quasi-Linearly Invariant Predictors," Revue Belge de Statistique et de Recherche Opérationnelle 9 (1968).
[22] Wilks, S., "Determination of Sample Sizes for Setting Tolerance Limits," Annals of Mathematical Statistics 12 (1941).
[23] Wilks, S., "Statistical Prediction With Special Reference to the Problem of Tolerance Limits," Annals of Mathematical Statistics 13 (1942).
APPENDIX
TABLES OF EFFICIENCIES
TABLE 1. Computations based on $e_m(\kappa_0) = [\mu(\kappa_0)]^2/\kappa_0^2\,b(\kappa_0)$

κ₀         0.1      0.2      0.3      0.4      0.5      0.6      0.7      0.8      0.9      1.0
e_m(κ₀)    0.00054  0.09960  0.37996  0.63355  0.80000  0.89873  0.95424  0.98339  0.99655  1.00000

κ₀         1.1      1.2      1.3      1.4      1.5      1.6      1.7      1.8      1.9      2.0
e_m(κ₀)    0.99753  0.99149  0.98334  0.97401  0.96409  0.95396  0.94385  0.93393  0.92428  0.91495
TABLE 2. Computations based on $e_m(\kappa_0) = [\mu(\kappa_0)]^{2\kappa_0}/m^2\,[e^{\mu^{\kappa_0}(\kappa_0)/m} - 1]$

 m \ κ₀    0.1      0.2      0.3      0.4      0.5      0.6      0.7      0.8      0.9      1.0
    1    0.22380  0.54150  0.63075  0.64753  0.64242  0.63072  0.61755  0.60471  0.59280  0.58198
    2    0.59447  0.63339  0.57570  0.52518  0.48633  0.45638  0.43282  0.41387  0.39833  0.38537
    3    0.64651  0.54524  0.46145  0.40667  0.36899  0.34164  0.32091  0.30467  0.29160  0.28086
    4    0.60970  0.46206  0.37826  0.32798  0.29473  0.27113  0.25352  0.23986  0.22896  0.22005
    5    0.55665  0.39703  0.31886  0.27388  0.24473  0.22429  0.20915  0.19749  0.18822  0.18067
    6    0.50543  0.34673  0.27502  0.23478  0.20901  0.19109  0.17788  0.16774  0.15970  0.15316
    7    0.46009  0.30720  0.24155  0.20531  0.18231  0.16638  0.15469  0.14573  0.13865  0.13290
    8    0.42090  0.27550  0.21523  0.18236  0.16161  0.14730  0.13683  0.12881  0.12248  0.11735
    9    0.38716  0.24959  0.19402  0.16398  0.14511  0.13213  0.12265  0.11540  0.10968  0.10505
   10    0.35804  0.22805  0.17659  0.14895  0.13166  0.11979  0.11112  0.10451  0.09930  0.09508
   15    0.25863  0.15903  0.12172  0.10208  0.08991  0.08161  0.07558  0.07099  0.06738  0.06447
   20    0.20177  0.12196  0.09282  0.07761  0.06824  0.06187  0.05725  0.05374  0.05098  0.04876
   25    0.16524  0.09887  0.07499  0.06260  0.05488  0.04982  0.04607  0.04323  0.04100  0.03921
   30    0.13985  0.08312  0.06290  0.05245  0.04604  0.04169  0.03855  0.03616  0.03429  0.03278
   35    0.12120  0.07170  0.05417  0.04513  0.03959  0.03585  0.03313  0.03108  0.02946  0.02817
   40    0.10693  0.06303  0.04757  0.03961  0.03473  0.03144  0.02905  0.02725  0.02583  0.02469
   45    0.09566  0.05623  0.04240  0.03529  0.03094  0.02799  0.02587  0.02426  0.02299  0.02198
   50    0.08653  0.05076  0.03824  0.03181  0.02789  0.02523  0.02331  0.02186  0.02072  0.01980
   60    0.07267  0.04248  0.03197  0.02658  0.02329  0.02107  0.01946  0.01825  0.01730  0.01653
   70    0.06263  0.03653  0.02747  0.02283  0.02000  0.01809  0.01671  0.01566  0.01484  0.01418
   80    0.05502  0.03204  0.02408  0.02001  0.01752  0.01585  0.01463  0.01372  0.01300  0.01242
   90    0.04906  0.02853  0.02143  0.01780  0.01559  0.01410  0.01302  0.01220  0.01156  0.01105
  100    0.04427  0.02571  0.01931  0.01604  0.01404  0.01270  0.01172  0.01099  0.01041  0.00995
TABLE 2. Computations based on $e_m(\kappa_0) = [\mu(\kappa_0)]^{2\kappa_0}/m^2\,[e^{\mu^{\kappa_0}(\kappa_0)/m} - 1]$ (Cont.)

 m \ κ₀    1.1      1.2      1.3      1.4      1.5      1.6      1.7      1.8      1.9      2.0
    1    0.57221  0.56341  0.55548  0.54832  0.54183  0.53593  0.53055  0.52562  0.52110  0.51694
    2    0.37441  0.36501  0.35686  0.34974  0.34345  0.33787  0.33288  0.32838  0.32432  0.32063
    3    0.27187  0.26424  0.25768  0.25198  0.24698  0.24256  0.23862  0.23509  0.23191  0.22902
    4    0.21264  0.20637  0.20099  0.19634  0.19226  0.18867  0.18547  0.18261  0.18003  0.17770
    5    0.17440  0.16911  0.16459  0.16067  0.15725  0.15424  0.15156  0.14917  0.14701  0.14507
    6    0.14775  0.14319  0.13929  0.13592  0.13298  0.13039  0.12809  0.12604  0.12419  0.12252
    7    0.12814  0.12413  0.12071  0.11776  0.11518  0.11291  0.11090  0.10910  0.10748  0.10602
    8    0.11311  0.10954  0.10649  0.10386  0.10157  0.09955  0.09776  0.09617  0.09473  0.09343
    9    0.10123  0.09801  0.09526  0.09290  0.09083  0.08902  0.08741  0.08597  0.08468  0.08351
   10    0.09160  0.08867  0.08618  0.08402  0.08215  0.08050  0.07903  0.07773  0.07655  0.07550
   15    0.06207  0.06005  0.05833  0.05685  0.05556  0.05443  0.05343  0.05253  0.05173  0.05100
   20    0.04693  0.04539  0.04408  0.04295  0.04197  0.04111  0.04035  0.03967  0.03906  0.03850
   25    0.03772  0.03648  0.03543  0.03452  0.03372  0.03303  0.03241  0.03186  0.03137  0.03093
   30    0.03154  0.03050  0.02961  0.02885  0.02818  0.02760  0.02708  0.02662  0.02621  0.02584
   35    0.02710  0.02620  0.02544  0.02478  0.02421  0.02371  0.02326  0.02287  0.02251  0.02219
   40    0.02375  0.02296  0.02229  0.02172  0.02121  0.02077  0.02038  0.02004  0.01972  0.01944
   45    0.02114  0.02044  0.01984  0.01933  0.01888  0.01849  0.01814  0.01783  0.01755  0.01730
   50    0.01904  0.01841  0.01787  0.01741  0.01701  0.01665  0.01634  0.01606  0.01581  0.01559
   60    0.01590  0.01537  0.01492  0.01453  0.01419  0.01390  0.01364  0.01340  0.01319  0.01300
   70    0.01364  0.01319  0.01280  0.01247  0.01218  0.01192  0.01170  0.01150  0.01132  0.01116
   80    0.01195  0.01155  0.01121  0.01092  0.01066  0.01044  0.01024  0.01007  0.00991  0.00977
   90    0.01063  0.01027  0.00997  0.00971  0.00948  0.00929  0.00911  0.00896  0.00881  0.00869
  100    0.00957  0.00925  0.00898  0.00874  0.00854  0.00836  0.00820  0.00806  0.00794  0.00782
TABLE 3. Computations based on $e_m(\kappa_0)$, equation (34), for $\kappa_0 > 2$ [the table is printed sideways in the original and its values are illegible in this copy]
CM
m
©
On
©
-1-
-+
CO
CO
©
f-
lO'
CJ
©
X
X
X
©
Cl
m
Cl
CO
in
©
■*
©
ON
ON
On
'ON
ON
ON
X
X
f-
nO
N*
CO
Cl
CJ
©
©
X
p-
NO
m
©
in
m
"+
<*
-f
-f
-f
-*
•+
-*
-+
■+
-*
Tf
-f-
cf
-+
-f
CO
co
en
CO'
en
CO
©
d
d
©
©
©
©
©
©
©
d
©
d
©
©
©
©
©
d
©
©
©
©
8
00
On
ON
p-
CJ
•*
-t
r~
CI
m
m
-+
^H
CM
-!-
NO
CM
_
X
m
X
Cl
in
CM
©
3
CM
©
m
nC
CO
X
©
r-
CJ
i
CJ
CO
'©
-f
lO
p-
'©■
X
©
§
ON
©
ON
On
X
t~-
NO
•*
CO'
Cl
Cl
Cl
CO'
p-
©
CO
Cl
Cl
CO
LO
X
©
co
CO
r~
t-
t-
r~
t~>
r-
t--
NO
m
-+
co
CM
©'
©
X
p-
©'
m
o_
in
-f
-f
•+
-f
»*
-t-
-f
-1-
-*
-f
-t
-+
^-r
"*
-f
-f
-f
CO
CO'
■co
CO
en
cm
©
d
d
©
©
©
©
©
©
©
d
©
©
©
©
©
d
d
©
d
d
©
©
©
NO
r~
co
C-J
©
CM
CO
•o
ON
nC
©
p-
-+
X
CO
X
m
ss
r~
©
©
_
rt
8
f-
CO
Cl
-f
o-
X
CJ
r-
Cl
r^
Cl
r^
10
CM
©
©
ie
©
©
■*
m
to
r^
©
r~
nO
NO
m
LO
•+
'O
Cl
p-
CO
©■
LO
cf
X
CO
X
eo
©
CO
•+
Cl
CI
©
©
0>
©
X
X
X
c—
©
©
lO
m
n©
m
-p
-f
-f
-f
>*
-f-
-f
-J-
*
-t-
-+
-t
CO
CO
CO
CO'
CO
en
CO
CO
CO
en
cm
©
©
©
©
©
©
©
o
©
©
©
©
©
©
©
©
©
©
©
©
©
©
©
©
__
^H
0>
tr.
©
in
X
Cl
^ H
-+
nC
©
p-
CO
nC
©
m
in
p-
CO
P-
en
^H
©
p-
NO
LO
On
©
m
Cl
©
CJ
nC
'On
IO
X
CO
NO
X
X
X
©
CO
X
©
00
-f
CO
CO
r~
CO
©
On
X
!S
m
CO
Cl
©
X
©
m
X
iO
CO
©
©
CM
p-
NO
m
m
m
m
-f
N*
-*
-T"
-+
-+
CO
CO
CO
CO
Cl
CM
Cl
CM
rj;
in
-f
en
CO
en
CO'
CO
CO
CO
CO
CO
co
CO'
CO
CO
CO
CO
CO
CO
en
CO
CO
en
cm
©'
d
d
d
©
©
©
©
©
d
©
©
©
©
'©
d
d
'©
©'
©
©
©
©
Q
CO
p-
CO
m
©
X
CM
ON
-t
X
m
p-
NO
p-
p-
en
On
10
O
©
CM
CM
©
CO
nO
CO
CO
r—
m
co
B5
f—
LO
ON
X
©
CM
-+
vC
i-
©
en
©
©
CM
m
-+
co
CO
t^
r~
1/5
CO'
CJ
ON
X
X
p-
NO
m
-+
CO
CM
©
©
P-
NO
CM
ON
*+
CM
©
©
©
©
9i
ON
ON
ON
©
©
■©
©'
©.
•©
X
X
X
co
m
CO
eo
CO
CO
co
CO
CO
CO
CO
Cl
CJ
Cl
Cl
Cl
Cl
Cl
CM
°i
CJ
Cl
Cl
CM
cm
©
©
d
©
©
©
d
©
d
©
©
©
'©
©
©
©
d
©
©
d
d
©
d
—
©
-t
p-
r^
©
X
0>
-+
-t-
-1-
-+
CM
sC
X
©
X
©
On
X
10
-+
X
Tf
CO
ON
ON
co
S
X
CO
r-
o-
-+
ON
m
nC
9
©
©
CJ
p-
CM
1
ON
On
r-
-+
nC
CO
X
X
©
-+
©
r~-
Cl
■o>
r—
I
NO
lO
lO
5
1
CO
-1-
r-
CO
ON
X
X
r^
f-
lO
lO
-+
-!■
-t
-+
-r
-f
Tt
r— 1
m
CO
OJ
CI
CM
CM
©
©
O
©
©
©
©
©
©
©
©
©
©
©
©
©
©
©
©
©
©
©
©
©
„
CM
CO
-t
m
NO
r-
X
ON
©
l0
©
in
©
m
©
m
©
©
©
©
©
8
*
S
'—'
•" '
Cl
CM
CO
CO
-*
•*
iO
NO
t»
X
©
508 J. TIAGO DE OLIVEIRA AND S. B. LITTAUER

Table 4. Computations based on e_m(κ₀)

  κ₀    0.10000 0.20000 0.30000 0.40000 0.50000 0.60000 0.70000 0.80000 0.90000 1.00000
   m
   1    0.00054 0.09960 0.37996 0.63355 0.80000 0.89873 0.95424 0.98339 0.99655 1.00000
   2    0.00000 0.00000 0.00000 0.00010 0.00116 0.00560 0.01626 0.03414 0.05755 0.08333
   3    0.00000 0.00000 0.00000 0.00001 0.00016 0.00096 0.00324 0.00762 0.01414 0.02222
   4    0.00000 0.00000 0.00000 0.00000 0.00005 0.00032 0.00120 0.00306 0.00605 0.01000
   5    0.00000 0.00000 0.00000 0.00000 0.00002 0.00014 0.00058 0.00158 0.00329 0.00565
   6    0.00000 0.00000 0.00000 0.00000 0.00001 0.00007 0.00033 0.00094 0.00204 0.00362
   7    0.00000 0.00000 0.00000 0.00000 0.00000 0.00004 0.00021 0.00062 0.00138 0.00252
   8    0.00000 0.00000 0.00000 0.00000 0.00000 0.00003 0.00014 0.00043 0.00099 0.00185
   9    0.00000 0.00000 0.00000 0.00000 0.00000 0.00002 0.00010 0.00031 0.00074 0.00142
  10    0.00000 0.00000 0.00000 0.00000 0.00000 0.00001 0.00007 0.00024 0.00057 0.00112
  20    0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00001 0.00004 0.00011 0.00025
  30    0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00001 0.00004 0.00011
  50    0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00001 0.00004

  κ₀    1.10000 1.20000 1.30000 1.40000 1.50000 1.60000 1.70000 1.80000 1.90000 2.00000
   m
   1    0.99753 0.99149 0.98334 0.97401 0.96409 0.95396 0.94385 0.93393 0.92428 0.91495
   2    0.10856 0.13142 0.15119 0.16783 0.18166 0.19307 0.20248 0.21025 0.21669 0.22205
   3    0.03108 0.04001 0.04854 0.05640 0.06351 0.06985 0.07548 0.08047 0.08490 0.08882
   4    0.01459 0.01946 0.02434 0.02904 0.03345 0.03754 0.04128 0.04470 0.04781 0.05064
   5    0.00851 0.01166 0.01492 0.01814 0.02125 0.02419 0.02695 0.02951 0.03189 0.03408
   6    0.00560 0.00785 0.01022 0.01263 0.01499 0.01727 0.01943 0.02147 0.02339 0.02518
   7    0.00398 0.00568 0.00752 0.00941 0.01130 0.01314 0.01492 0.01661 0.01821 0.01973
   8    0.00299 0.00433 0.00581 0.00735 0.00891 0.01046 0.01196 0.01340 0.01478 0.01609
   9    0.00233 0.00342 0.00465 0.00594 0.00727 0.00859 0.00989 0.01115 0.01236 0.0…
  10    0.00187 0.00279 0.00382 0.00493 0.00608 0.00724 0.00838 0.00949 0.01057 0.01161
  20    0.00047 0.00076 0.00112 0.00154 0.00201 0.00250 0.00301 0.00353 0.00405 0.00458
  30    0.00021 0.00037 0.00057 0.00081 0.00109 0.00139 0.00172 0.00206 0.00241 0.00277
  50    0.00008 0.00015 0.00025 0.00037 0.00052 0.00069 0.00087 0.00108 0.00129 0.00151
 100    0.00002 0.00005 0.00008 0.00013 0.00019 0.00027 0.00036 0.00046 0.00057 0.00066
WEIBULL DISTRIBUTION FORECASTERS 509

Table 5. Computations based on e_m(κ₀),
κ₀² ≈ 2κ₀(κ₀ - 1)Γ(1 - 1/κ₀)μ_m + (κ₀ - 1)²Γ(1 - 2/κ₀)μ_m²

  κ₀    2.10000 2.20000 2.30000 2.40000 2.50000 2.60000 2.70000 2.80000 2.90000 3.00000
   m
   1    0.91696 0.91963 0.92273 0.92610 0.92961 0.93315 0.93667 0.94012 0.94347 0.94668
   2    0.24844 0.27307 0.29596 0.31716 0.33677 0.35488 0.37162 0.38708 0.40138 0.41461
   3    0.11212 0.13402 0.15448 0.17353 0.19123 0.20765 0.22288 0.23701 0.25012 0.26229
   4    0.07178 0.09168 0.11030 0.12766 0.14380 0.15878 0.17270 0.18561 0.19761 0.20876
   5    0.05391 0.07257 0.09003 0.10630 0.12144 0.13549 0.14854 0.16065 0.17190 0.18236
   6    0.04413 0.06196 0.07864 0.09418 0.10862 0.12203 0.13448 0.14603 0.15676 0.16674
   7    0.03806 0.05530 0.07141 0.08642 0.10036 0.11330 0.12530 0.13645 0.14679 0.15641
   8    0.03396 0.05075 0.06644 0.08104 0.09460 0.10718 0.11885 0.12967 0.13972 0.14906
   9    0.03103 0.04747 0.06282 0.07710 0.09035 0.10265 0.11405 0.12462 0.13443 0.14355
  10    0.02883 0.04498 0.06006 0.07409 0.08710 0.09916 0.11034 0.12070 0.13032 0.13925
  15    0.02297 0.03826 0.05250 0.06571 0.07795 0.08928 0.09976 0.10947 0.11847 0.12682
  20    0.02043 0.03526 0.04904 0.06182 0.07364 0.08457 0.09467 0.10402 0.11268 0.12070
  25    0.01902 0.03356 0.04705 0.05955 0.07110 0.08177 0.09163 0.10074 0.10917 0.11699
  30    0.01813 0.03245 0.04575 0.05805 0.06941 0.07990 0.08958 0.09852 0.10679 0.11445
  35    0.01751 0.03168 0.04482 0.05698 0.06819 0.07854 0.08808 0.09690 0.10505 0.11259
  40    0.01705 0.03111 0.04413 0.05616 0.06727 0.07750 0.08694 0.09566 0.10371 0.11116
  45    0.01670 0.03066 0.04359 0.05553 0.06654 0.07668 0.08604 0.09467 0.10264 0.11001
  50    0.01643 0.03031 0.04315 0.05501 0.06594 0.07601 0.08530 0.09386 0.10176 0.10907
  60    0.01602 0.02977 0.04249 0.05423 0.06503 0.07499 0.08415 0.09260 0.10040 0.10761
  70    0.01573 0.02939 0.04201 0.05365 0.06436 0.07422 0.08330 0.09166 0.09938 0.10651
  80    0.01551 0.02909 0.04164 0.05321 0.06385 0.07363 0.08264 0.09093 0.09858 0.10564
  90    0.01534 0.02887 0.04135 0.05285 0.06343 0.07316 0.08210 0.09034 0.09793 0.10495
 100    0.01521 0.02868 0.04111 0.05256 0.06309 0.07276 0.08166 0.08985 0.09740 0.10436

  κ₀    3.10000 3.20000 3.30000 3.40000 3.50000 3.60000 3.70000 3.80000 3.90000 4.00000
   m
   1    0.94977 0.95270 0.95548 0.95812 0.96060 0.96294 0.96514 0.96721 0.96916 0.97098
   2    0.42687 0.43822 0.44877 0.45857 0.46769 0.47620 0.48413 0.49155 0.49849 0.50496
   3    0.27361 0.28413 0.29395 0.30309 0.31164 0.31963 0.32712 0.33414 0.34074 0.34692
   4    0.21914 0.22880 0.23781 0.24623 0.25409 0.26146 0.26837 0.27486 0.28097 0.28669
   5    0.19210 0.20116 0.20963 0.21753 0.22492 0.23184 0.23834 0.24444 0.25018 0.25557
   6    0.17602 0.18466 0.19273 0.20027 0.20731 0.21392 0.22011 0.22593 0.23141 0.23655
   7    0.16535 0.17368 0.18146 0.18872 0.19550 0.20187 0.20783 0.21344 0.21872 0.22368
   8    0.15774 0.16583 0.17337 0.18042 0.18700 0.19318 0.19897 0.20441 0.20953 0.21434
   9    0.15202 0.15991 0.16727 0.17415 0.18057 0.18659 0.19224 0.19754 0.20254 0.20723
  10    0.14756 0.15529 0.16250 0.16922 0.17551 0.18141 0.18694 0.19213 0.19702 0.20161
  15    0.13457 0.14178 0.14850 0.15477 0.16063 0.16611 0.17125 0.17608 0.18063 0.18489
  20    0.12815 0.13507 0.14152 0.14753 0.15314 0.15840 0.16332 0.16794 0.17229 0.17637
  25    0.12423 0.13096 0.13723 0.14307 0.14852 0.15362 0.15840 0.16288 0.16710 0.17105
  30    0.12155 0.12815 0.13428 0.14000 0.14533 0.15032 0.15499 0.15938 0.16350 0.16736
  35    0.11958 0.12607 0.13211 0.13773 0.14297 0.14787 0.15246 0.15677 0.16082 0.16461
  40    0.11806 0.12447 0.13042 0.13597 0.14114 0.14597 0.15050 0.15474 0.15873 0.16247
  45    0.11684 0.12318 0.12907 0.13455 0.13966 0.14444 0.14891 0.15310 0.15704 0.16074
  50    0.11584 0.12212 0.12795 0.13338 0.13844 0.14317 0.14760 0.15175 0.15564 0.15930
  60    0.11428 0.12046 0.12621 0.13155 0.13653 0.14118 0.14553 0.14961 0.15344 0.15704
  70    0.11310 0.11921 0.12489 0.13017 0.13508 0.13968 0.14397 0.14800 0.15177 0.15532
  80    0.11218 0.11823 0.12385 0.12908 0.13394 0.13849 0.14273 0.14672 0.15045 0.15395
  90    0.11143 0.11743 0.12301 0.12819 0.13301 0.13751 0.14172 0.14567 0.14937 0.15284
 100    0.11080 0.11677 0.12230 0.12745 0.13223 0.13670 0.14088 0.14479 0.14846 0.15190
Table 5. Computations based on e_m(κ₀) (Cont.)

  κ₀    4.10000 4.20000 4.30000 4.40000 4.50000 4.60000 4.70000 4.80000 4.90000 5.00000
   m
   1    0.97269 0.97431 0.97583 0.97722 0.97855 0.97978 0.98094 0.98202 0.98304 0.98399
   2    0.51107 0.51680 0.52219 0.52725 0.53204 0.53656 0.54081 0.54485 0.54865 0.555
   3    0.35277 0.35828 0.36347 0.36838 0.37303 0.37744 0.38161 0.38558 0.38934 0.39293
   4    0.29211 0.29722 0.30205 0.30662 0.31096 0.31507 0.31896 0.32268 0.32619 0.32956
   5    0.26067 0.26549 0.27004 0.27435 0.27844 0.28232 0.28600 0.28951 0.29284 0.29603
   6    0.24143 0.24602 0.25037 0.25448 0.25839 0.26210 0.26562 0.26898 0.27216 0.27521
   7    0.22837 0.23280 0.23699 0.24096 0.24473 0.24830 0.25169 0.25493 0.25800 0.26095
   8    0.21889 0.22319 0.22726 0.23110 0.23476 0.23823 0.24152 0.24467 0.24765 0.25051
   9    0.21167 0.21586 0.21982 0.22357 0.22714 0.23053 0.23373 0.23680 0.23971 0.24250
  10    0.20596 0.21006 0.21394 0.21761 0.22110 0.22441 0.22755 0.23056 0.23340 0.23613
  15    0.18893 0.19274 0.19634 0.19975 0.20299 0.20607 0.20898 0.21177 0.21441 0.21694
  20    0.18023 0.18387 0.18731 0.19057 0.19367 0.19661 0.19939 0.20205 0.20457 0.20699
  25    0.17480 0.17833 0.18166 0.18482 0.18782 0.19067 0.19336 0.19594 0.19838 0.20073
  30    0.17102 0.17447 0.17772 0.18081 0.18374 0.18651 0.18915 0.19166 0.19405 0.19633
  35    0.16820 0.17159 0.17479 0.17781 0.18069 0.18341 0.18599 0.18846 0.19080 0.19304
  40    0.16600 0.16934 0.17249 0.17547 0.17830 0.18098 0.18352 0.18595 0.18825 0.19046
  45    0.16423 0.16752 0.17063 0.17357 0.17636 0.17901 0.18152 0.18392 0.18619 0.18837
  50    0.16275 0.16601 0.16908 0.17199 0.17475 0.17737 0.17985 0.18222 0.18447 0.18662
  60    0.16043 0.16363 0.16665 0.16950 0.17222 0.17478 0.17722 0.17955 0.18175 0.18386
  70    0.15866 0.16182 0.16479 0.16761 0.17028 0.17281 0.17521 0.17750 0.17967 0.18175
  80    0.15726 0.16038 0.16332 0.16610 0.16874 0.17124 0.17361 0.17587 0.17802 0.18007
  90    0.15611 0.15920 0.16211 0.16486 0.16748 0.16995 0.17230 0.17454 0.17666 0.17869
 100    0.15515 0.15821 0.16110 0.16383 0.16642 0.16887 0.17119 0.17341 0.17552 0.17753
  κ₀    5.50000 6.00000 6.50000 7.00000 7.50000 8.00000 8.50000 9.00000 9.50000 10.00000
   m
   1    0.98794 0.99079 0.99281 0.99424 0.99525 0.99591 0.99645 0.99664 0.99679 0.99682
   2    0.56780 0.58004 0.58982 0.59782 0.60436 0.60987 0.61460 0.61846 0.62216 0.62531
   3    0.40851 0.42101 0.43121 0.43971 0.44681 0.45289 0.45820 0.46267 0.46689 0.47058
   4    0.34423 0.35607 0.36580 0.37396 0.38082 0.38673 0.39191 0.39632 0.40048 0.40413
   5    0.30993 0.32120 0.33048 0.33828 0.34488 0.35057 0.35557 0.35985 0.36388 0.36743
   6    0.28853 0.29933 0.30825 0.31577 0.32213 0.32764 0.33248 0.33663 0.34054 0.34399
   7    0.27381 0.28425 0.29288 0.30016 0.30634 0.31168 0.31639 0.32043 0.32424 0.32761
   8    0.26300 0.27314 0.28154 0.28863 0.29465 0.29986 0.30446 0.30841 0.31213 0.31542
   9    0.25468 0.26459 0.27279 0.27972 0.28560 0.29071 0.29520 0.29908 0.30273 0.30596
  10    0.24806 0.25776 0.26580 0.27259 0.27836 0.28337 0.28778 0.29159 0.29518 0.29835
  15    0.22802 0.23704 0.24451 0.25084 0.25623 0.26091 0.26504 0.26861 0.27198 0.27496
  20    0.21758 0.22619 0.23333 0.23938 0.24454 0.24902 0.25298 0.25641 0.25964 0.26250
  25    0.21097 0.21931 0.22623 0.23209 0.23709 0.24143 0.24527 0.24860 0.25173 0.25452
  30    0.20633 0.21447 0.22122 0.22694 0.23182 0.23606 0.23981 0.24306 0.24613 0.24885
  35    0.20285 0.21082 0.21745 0.22306 0.22784 0.23201 0.23569 0.23888 0.24189 0.24456
  40    0.20011 0.20796 0.21448 0.22000 0.22471 0.22880 0.23243 0.23557 0.23854 0.24117
  45    0.19789 0.20563 0.21206 0.21750 0.22215 0.22619 0.22977 0.23287 0.23580 0.23840
  50    0.19603 0.20368 0.21004 0.21542 0.22001 0.22401 0.22755 0.23061 0.23351 0.23608
  60    0.19309 0.20060 0.20683 0.21211 0.21661 0.22053 0.22400 0.22701 0.22985 0.23238
  70    0.19084 0.19823 0.20436 0.20956 0.21399 0.21786 0.22127 0.22424 0.22704 0.22953
  80    0.18905 0.19634 0.20239 0.20752 0.21190 0.21571 0.21909 0.22202 0.22478 0.22724
  90    0.18757 0.19478 0.20077 0.20584 0.21017 0.21394 0.21728 0.22018 0.22292 0.22535
 100    0.18632 0.19347 0.19940 0.20443 0.20871 0.21245 0.21576 0.21863 0.22134 0.22375
TABLE 6. Computations based on

   m        1       2       3       4       5       6       7       8       9      10      15      20
        0.67294 0.66695 0.62584 0.58335 0.54532 0.51222 0.48346 0.45831 0.43617 0.41652 0.34379 0.29635

   m       25      30      35      40      45      50      60      70      80      90     100
        0.26249 0.23687 0.21666 0.20023 0.18655 0.17495 0.15626 0.14177 0.13015 0.12057 0.11253
TABLE 7. Computations based on

   m        1       2       3       4       5       6       7       8       9      10      15      20
        0.97750 0.40975 0.27466 0.22518 0.19955 0.18373 0.17291 0.16497 0.15887 0.15401 0.13926 0.13151

   m       25      30      35      40      45      50      60      70      80      90     100
        0.12658 0.12310 0.12047 0.11839 0.11670 0.11528 0.11302 0.11128 0.10989 0.10874 0.10776
A COMPARISON OF SEVERAL ATTRIBUTE SAMPLING PLANS
Alan E. Gelfand
University of Connecticut
Storrs, Connecticut
ABSTRACT
The purpose of this research is to examine several types of procedures for attribute
sampling inspection — the widely used Military Standard 105D plans [8], the lesser known
Double Zero plans as developed by Ellis [4] and the Narrow Limit gaging plans of Ott and
Mundel [9]. Each of the procedures is described with an effort made to illuminate their
more subtle features. Then the plans are compared, whence it is revealed that (i) Narrow
Limit gaging plans have a serious weakness in comparison to the others and (ii) Double Zero
plans tend to be essentially conservative, but that sufficiently tight Military Standard 105D
plans can be selected to achieve comparable performance in all ways.
1. INTRODUCTION

The need for, and advantages of, employing a quality control program to guide a production process is well established. One basic device for checking the stability of such a repetitive process is the statistical control chart initially developed by Shewhart [10]. A sample of items is selected, and for each item a measurement on a process variable of interest is taken. These measurements are examined in conjunction with previous information on the process in order to make a decision as to whether the process is in a state of control or is out of control.

In many situations the process variable is dichotomized and items are classified only in terms of whether they are "good" or "defective." The acceptability of an item may be determined in various ways. For instance, we may have to refer to prescribed upper and lower drawing (or specification) limits, or we may note whether or not it withstands certain testing procedures, or we may just "eyeball" whether or not it is suitable for use. A sample of items is selected and examined to record the number of defectives obtained. Such an approach is referred to as sampling inspection by attributes. Specification of a sample size to be inspected along with the maximum allowable number of defectives specifies an attribute sampling plan. While such a "go - no go" approach retains less information than one has with the actual measurement value for each item, it has its advantages in terms of facility, efficiency, reliability, and cost considerations.

It is not our intent to examine the question of assigning individual component tolerances. Developing such specifications is critical in attempting to achieve a desired overall level of performance for a larger system composed of various components. A good survey discussion of such ideas is contained in the paper of Evans [5]. Rather, we shall assume that criteria for the acceptability of an item (component or system) have been determined and that lots of production items are being subjected to attribute sampling inspection according to these criteria. Our concern focuses on the selection of an appropriate attribute sampling plan and the ramifications of such a choice. In discussing attribute sampling plans a variety of performance characteristics are often used. Among these are the OC (operating characteristic) curve, the AQL (acceptable quality level), the AOQL (acceptable outgoing quality level), and the LTPD (lot tolerance percent defective). Numerous texts discuss these concepts. Among the most well known are Burr [2], Grant [6], Brownlee [1], and Johnson and Leone [7]. Virtually all the information regarding the performance of an attribute sampling plan is contained in its OC curve. The first extensive tabulation of such plans was developed by Dodge and Romig [3]. The emphasis in recent years has been on developing plans to achieve certain specified types of performance. Among these are the extensive and widely used MIL STD 105D plans [8], the Ott-Mundel Narrow Limit Gaging plans [9], and the Double Zero plans developed by Ellis [4]. It is our intent to examine and compare these plans.
The format of this paper, then, is the following. In the second section we shall describe the plans. In the third section we show that, despite their different origins and intents, the plans are directly mathematically comparable in terms of operating characteristic curves. We shall finally compare the plans according to:
1. Inherent assumptions and suitability for application.
2. Philosophy of lot acceptance.
3. Consumer and producer protection.
4. Relative cost.
2. A DESCRIPTION OF THE PLANS

The clearest description of any attribute sampling plan can be obtained by examining its OC curve, which is a plot of P_a vs. p, where P_a is the probability of acceptance of a lot containing a proportion, p, of defectives. We let N indicate the lot size, and denote an attribute sampling plan as an (n, c) plan, where n is the sample size to be inspected and c is the acceptance number. A lot is accepted if, at most, c defectives are found in the sample of n. At any value of p, P_a is calculated exactly using the hypergeometric distribution by

P_a(p) = [1 / C(N, n)] Σ_{j=0}^{c} C(Np, j) C(N(1-p), n-j),

where C(a, b) denotes the binomial coefficient. When N is large the hypergeometric distribution is usually approximated by the binomial distribution, whence P_a becomes

P_a(p) = Σ_{j=0}^{c} C(n, j) p^j (1-p)^{n-j}.

This expression is exact if we assume an infinite lot size. Moreover, if n is quite large and p is small, the Poisson distribution is employed to approximate P_a, yielding

P_a(p) ≈ Σ_{j=0}^{c} (np)^j e^{-np} / j!.

Clearly, P_a decreases in p, and P_a(0) = 1, P_a(1) = 0.
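As an illustrative sketch only (the function names and the example parameters are ours, not part of any published plan), the three forms of P_a just described can be evaluated directly:

```python
from math import comb, exp, factorial

def pa_hypergeometric(p, N, n, c):
    # Exact P_a for an (n, c) plan on a lot of size N containing K = N*p defectives.
    K = round(N * p)
    return sum(comb(K, j) * comb(N - K, n - j) for j in range(c + 1)) / comb(N, n)

def pa_binomial(p, n, c):
    # Infinite-lot (large N) approximation.
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(c + 1))

def pa_poisson(p, n, c):
    # Approximation for large n and small p.
    return sum((n * p)**j * exp(-n * p) / factorial(j) for j in range(c + 1))
```

Evaluating any of these over a grid of p values traces the OC curve of the (n, c) plan; for large lots and small p the three curves nearly coincide.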
SAMPLING PLANS COMPARISON 515

The attribute sampling plans described in Military Standard 105D are the most widely used in the United States today. These procedures, by their own description, have been developed for applications requiring the "continuous" inspection of end items, components, and raw materials, as well as maintenance, administrative and data-keeping operations. These schemes are straightforward sampling plans which are tabulated such that, given the lot size, N, and a selected AQL from a set of preferred values, a particular (n, c) plan is specified. The AQL is nothing more than a choice of p which is deemed acceptable. Specifying N and an AQL does not uniquely determine n and c, but rather they are determined such that the domain for the probability of acceptance, P_a, at the AQL is usually somewhere between 0.88 and 0.99. The plans allow for tightened and reduced inspection as well, changing P_a appropriately. More precisely, tightened inspection decreases P_a at each p, hence lowering the OC curve. Similarly, reduced inspection increases P_a at each p, thus raising the OC curve.

The Ott-Mundel Narrow Limit gaging plans and the Ellis Double Zero plans have been much less discussed. This is due, in part, to the fact that neither of these approaches has been as extensively tabulated as the Military Standard 105D plans. Additionally, these plans require the determination of upper and lower specification limits for the process variable as opposed to the simple go - no go (defective - nondefective) determination needed for the Military Standard 105D plans. Finally, in terms of the respective critical characteristics which form the bases of these plans, we shall show that there always exists a Military Standard 105D plan which will achieve comparable performance. These remarks are not intended to disparage Narrow Limit gaging or Double Zero plans. In applications where the performance criteria which these plans are designed to control are crucial, these approaches directly specify plans which will provide the desired protection.

Narrow Limit gaging plans and Double Zero plans are quite similar in their presumptions. Thus it is not surprising that Ott and Mundel discuss an application of their plans to a machine shop, while Ellis indicates his plans are appropriate for aircraft and metal working industries with the likelihood of similar suitable areas of application.

Let us describe the Double Zero sampling plans first. The thrust behind such plans is to reject lots which are not zero-defective, i.e., totally free of defects. Sampling procedures which select an AQL, or some variant thereof, implicitly assume that the consumer is willing to accept lots containing a percentage of defective items. Clearly, zero-defect plans would be very useful in certain industrial situations.
The following assumptions are inherent in so-called Double Zero sampling plans.

(1) Double Zero plans never knowingly accept a lot that contains defective items. That is, only zero acceptance numbers are allowed.

(2) In examining an item (or characteristic of an item) to determine whether or not it is defective, it is crucial to assume that the measured variable follows a normal distribution. Such an assumption is far from unreasonable. For example, experimentation in the metal-working industry has shown that, for over 95 percent of samples, measurements were obtained which formed an essentially normal frequency distribution.

(3) In making the decision as to whether or not an item (or characteristic of an item) is defective, it is presumed that there is a region of prescribed dimensional tolerance characterized by D_L, the lower tolerance (or drawing) limit, and D_U, the upper tolerance (or drawing) limit. Additionally, the existence of a "gray area" is assumed, which extends below D_L to the lower suitable-for-use limit, S_L, and above D_U to the upper suitable-for-use limit, S_U. The situation is described in Figure 1. This "gray area" is established, not with the intent of accepting lots having items within this area, but rather with the intent of insuring that no lots with items beyond this area will be accepted.
S_L        D_L        D_U        S_U

Figure 1
The distinguishing feature of Double Zero plans is in the examination of the extent of nonconformance of defective items rather than the percentage of nonconforming items. If, in Figure 1, an item yielded an observed measurement X, then the extent of nonconformance could be computed, as a percent of the given tolerance, by

(X - D_U) / (D_U - D_L) × 100.

For applications where product may still be suitable for use even if nonconforming, this extent is more useful than just a decision to reject. Note that if X fell between D_L and D_U, the observation is within specification and the notion of "extent of nonconformance" isn't meaningful.
A Double Zero sampling plan is characterized by specifying a percent of tolerance such that, under the "worst conditions" (to be clarified shortly), the probability is essentially zero that an item in the lot will exceed this percent of tolerance. Moreover, in the terminology "Double Zero," while the first zero refers to the fact that only zero acceptance numbers are allowed, the second zero refers to the fact that the above probability is claimed to be always essentially zero. More specifically, a typical Double Zero sampling plan might be denoted by 0050, and is intended to mean that a zero acceptance number is being used and the chance of there being an item in the lot which is defective by more than 50 percent of tolerance is virtually zero.

In determining what percent of tolerance to use for a particular item, one would choose a percent at most

(S_U - D_U) / (D_U - D_L) × 100,

in order to be essentially certain that the item is, at worst, in the gray area. For example, if the diameter of an item is to be within 0.002" of specification and is suitable for use within 0.003" of specification, then it is clear that any plan at least as tight as 0025 would be appropriate.
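The two percent-of-tolerance computations above can be sketched as follows (an illustration only; the helper names are ours). The second function reproduces the 0.002"/0.003" example, which yields 25 percent and hence a 0025 plan:

```python
def extent_percent(x, d_l, d_u):
    # Extent of nonconformance of a measurement x as a percent of the
    # tolerance (d_l, d_u); zero when x is within specification.
    if d_l <= x <= d_u:
        return 0.0
    beyond = (x - d_u) if x > d_u else (d_l - x)
    return 100.0 * beyond / (d_u - d_l)

def max_plan_percent(s_u, d_u, d_l):
    # Largest percent of tolerance a Double Zero plan may quote so that an
    # item at that extent is still, at worst, in the gray area.
    return 100.0 * (s_u - d_u) / (d_u - d_l)
```

With D_L = -0.002, D_U = 0.002 and S_U = 0.003, max_plan_percent gives 25.0, matching the 0025 plan above.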
Let us elaborate further on the so-called "worst condition" mentioned above. Suppose an infinite lot size with proportion of defectives p. Then, under assumption (2), the measurements will be independent with common normal distribution such that 1 - p of the area is between D_L and D_U, and p of the area is outside. Now suppose we choose a (0 ≤ a ≤ 1) such that ap is the probability of an observation above D_U and (1 - a)p is the probability of an observation below D_L, and consider all possible normal distributions (means denoted by μ and variances denoted by σ²) satisfying these requirements. Then, if z_γ denotes the γth percentile under the standard (or unit) normal curve, D_U = μ + z_{1-ap}σ and D_L = μ + z_{(1-a)p}σ. Moreover, for any normal curve, the probability is essentially one that an observation will fall between μ - 3σ and μ + 3σ. Hence the extent of nonconformance can be considered as the extent above D_U, i.e., (μ + 3σ) - D_U, or the extent below D_L, i.e., D_L - (μ - 3σ). By expressing the extent as a percent of the total tolerance, we would obtain the percent above

[(μ + 3σ) - (μ + z_{1-ap}σ)] / [(μ + z_{1-ap}σ) - (μ + z_{(1-a)p}σ)] × 100 = [(3 - z_{1-ap}) / (z_{1-ap} - z_{(1-a)p})] × 100 = E(p; a).

Note that the extent E(p; a) is independent of μ and σ and can be calculated for any given p and a using a standard normal curve. Due to symmetry considerations we need only examine the percent of tolerance above in terms of maximizing the extent. In other words, given any value of p, the "worst situation," or condition of maximum extent, is achieved by obtaining

E(p) = max_{0 ≤ a ≤ 1} E(p; a).
We note that:

(1) Again E(p) can be calculated for each p and is scale and location invariant, i.e., free of μ and σ².

(2) Clearly E(p) is strictly increasing in p since E(p; a) increases in p at a fixed a. Thus E(p) is a one-to-one function of p with E(0) = 0, E(1) = ∞.

(3) Perhaps surprisingly, in calculating E(p), the maximum doesn't necessarily occur at a = 1. For example, with p = 0.06 it is easy to calculate that E(0.06, 5/6) = 33.6 while E(0.06, 1) = 31.6, and in fact that E(0.06) = 34.12 and occurs at a = 0.8267.

(4) The critical use of the normality assumption is apparent.

Since E(p) is monotone in p, one can find p such that, for example, E(p) = 50. This corresponds to the 0050 sampling plan mentioned earlier (p is approximately 0.11). Because for all Double Zero sampling plans c = 0, the sampling scheme is completely specified once the sample size n is given. For each lot size, N, and for each (n, 0) plan, we can calculate (by the OC curve for example) p such that P_a(p) = 0.05, and for this p compute E(p). Conversely, given N and a particular extent E(p), we can find p and hence n such that P_a(p) = 0.05. Conveniently, Ellis presents (in his Figure 1) a graph which, for various lot sizes, at a given value of E(p), immediately yields the appropriate sample size. (Presumably one would round n up to the nearest integer.) It is obvious that n decreases in E(p). In industrial applications 0050 and 0025 plans typically are used, corresponding to the tightness of the suitable-for-use limit.
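Under the normality assumption, E(p; a) needs only a standard normal inverse CDF, and E(p) can be approximated by a grid search over a. The sketch below is our own illustration, not part of the published plans: the grid resolution and helper names are assumptions, and the c = 0 sample-size solver uses the infinite-lot binomial form P_a(p) = (1 - p)^n rather than Ellis's graph.

```python
from math import ceil, log
from statistics import NormalDist

def extent_pa(p, a):
    # E(p; a) = 100 * (3 - z_{1-ap}) / (z_{1-ap} - z_{(1-a)p}), the maximum
    # nonconformance above D_U as a percent of tolerance.
    z = NormalDist().inv_cdf
    z_hi = z(1 - a * p)      # standardized D_U
    z_lo = z((1 - a) * p)    # standardized D_L
    return 100.0 * (3 - z_hi) / (z_hi - z_lo)

def max_extent(p, grid=2000):
    # E(p) = max over 0 < a < 1 of E(p; a), by grid search; returns (E(p), a*).
    return max((extent_pa(p, k / grid), k / grid) for k in range(1, grid))

def sample_size_for(p, pa=0.05):
    # Smallest n with (1 - p)^n <= pa for a c = 0 plan on an infinite lot.
    return ceil(log(pa) / log(1 - p))
```

Under these assumptions, the maximum for p = 0.06 is about 34.1 and occurs at an interior a near 0.83, consistent with note (3) above, and the 0050 plan's p ≈ 0.11 leads to a sample size of 26.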
Note that the presumption of an infinite lot size was necessary in using p to characterize a class of normal distributions as part of the development of the maximum extent condition. Allowing finite lot size means that for some choices of E(p) (hence p) we may not be able to choose n(≤ N) such that P_a(p) = 0.05. This is clearly illustrated in the Ellis Figure. However the more crucial point is that the interpretation of this maximum percent of tolerance is fuzzy at best. It is accurate to say that, given a lot of infinite size, at a quality level p which is acceptable five percent of the time, under normality of measurements, the chance of any randomly selected item exceeding E(p) is essentially zero. What this has to do with the performance at the same quality level p of all items in any particular finite lot, not necessarily assumed to be from an infinite population which is being continuously sampled, is not at all clear. In fact, even if we accept the latter sampling supposition, the distributional assumptions made regarding the measurements will be invalid unless p for the finite lot is assumed to be exactly the same as the p for the infinite lot. This is unreasonable since the value of p will almost surely vary over finite lots, and only for large N would it be approximately the value for an infinite lot. Hence the entire analysis may be very susceptible to error and highly inaccurate when lot sizes are small (for example the N = 10, 25, 50 lines in the Ellis Figure). Thus Double Zero plans only guarantee the indicated protection when lot sizes can be assumed infinite, and can be expected to be accurate only when lot sizes are sufficiently large, i.e., such that the earlier binomial approximation to the hypergeometric distribution is good.

We also note that the above analysis has been done for any randomly selected measurement from the assumed independent and identically distributed items in the lot. However, for example, the largest and/or the smallest measurements in the lot will have very different distributions, so the development is not at all valid for the "worst" piece in the lot. Therefore the strength of the second zero in the Double Zero plan may be suspect.

Finally the Double Zero terminology disguises the fact that the analysis has been done for lots of a quality level which we would expect to reject 95 percent of the time. Selection of this 5 percent level of acceptance is arbitrary and is not indicated in the notation. Moreover, the actual quality level, p, at this value of P_a is hidden as well and in fact not clearly defined for finite lot sizes by the above remarks. In any case, quoting the maximum extent at this one quality level hardly gives an indication of the performance of the plan at other quality levels, except for the obvious monotonicity inferences, i.e., for lots of higher quality (smaller p) we will accept more often and the maximum extent will be smaller (with a similar statement for lots of poorer quality).
We now examine the Narrow Limit gaging plans. As we shall see, these plans approach the idea of a "gray" area much more straightforwardly than do the Double Zero plans. Assumptions inherent in the development of such plans are:

(1) We presume that the measured variable follows a normal distribution.

(2) We presume that a lower specification limit, S_L, and an upper specification limit, S_U, are given and that these specifications are wide in comparison with the process capabilities, which under normality can be taken as 6σ.

(3) We presume go/no-go gages are prepared which are stricter than specification by an amount tσ (t given). Figure 2 illustrates the situation, where G_L and G_U denote the gage lower and upper limits respectively. The area between S_L and G_L and between G_U and S_U may be thought of as a "gray" area analogous to the Double Zero plans.
FIGURE 2. Gage limits G_L and G_U set a distance tσ inside the specification limits S_L and S_U.
SAMPLING PLANS COMPARISON 519
Suppose we now consider a process which is producing p percent defectives. Then by our assumptions this area, p, lies completely below S_L or completely above S_U. By symmetry we consider the latter, whereby S_U = μ + z_{1-p}σ and G_U = μ + (z_{1-p} - t)σ. Then the chance that an item will not pass the Narrow Limit gage is

p_t = 1 - Φ(z_{1-p} - t) = Φ(z_p + t),

where Φ is the standard (unit) normal cumulative distribution function. Any (n, c) sampling plan becomes a Narrow Limit gaging plan at a given t simply by plotting an OC curve of the probability of acceptance P_a vs. p_t instead of P_a vs. p. In other words, as a result of narrowing the gage by tσ, we have increased the incoming percent defective from p to p_t. The effect of increasing t is to increase p_t and hence reduce P_a for the lot. These last two sentences express the essence of Narrow Limit gaging plans, but, surprisingly, Ott and Mundel fail to directly mention this in their article. Note that if t is large so that tσ is large relative to p, or if p is quite small (virtually zero), then we shall be rejecting lots of good quality because the process is producing items outside the Narrow Limit gage but within the specification limits. The obvious correction for this problem is to either reduce t or increase c. The Ott-Mundel paper offers little help in selecting for a given t an appropriate n and c except to suggest sizes comparable to those used under a Shewhart control plan.
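The gage transformation above is easy to check numerically. The following sketch (Python; the function name is this edition's, not the paper's) reproduces the entries of Table 3(a): for example, p = 0.10 percent and t = 0.5 give p_t of roughly 0.48 percent.

```python
from statistics import NormalDist

def narrow_limit_fraction(p, t):
    """Fraction p_t of product failing a gage tightened by t standard
    deviations, when a fraction p (0 < p < 1) of the process falls outside
    the specification limit: p_t = 1 - Phi(z_{1-p} - t)."""
    nd = NormalDist()            # standard (unit) normal
    z = nd.inv_cdf(1.0 - p)      # z_{1-p}: spec limit, in sigma units above the mean
    return 1.0 - nd.cdf(z - t)   # gage sits t sigma inside the spec limit
```

The drastic inflation the text describes is immediate: at p = 0.10 percent, moving t from 0.5 to 1.7 raises p_t from about 0.5 percent to about 8 percent.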
COMPARING THE PLANS

Having described the plans we now compare them. Since all three approaches result in the selection of an (n, c) plan, the best comparison among such plans can be accomplished by examining their OC curves. This is particularly true in view of the discussion raised in the previous section regarding the theoretical basis for Double Zero sampling plans. In comparing two (n, c) plans, it is clear that at a fixed value of c the plan with the larger n will always be more stringent (i.e., conservative). Similarly, at a fixed value of n the plan with the smaller c will be tighter. Hence, given two plans (n1, c1) and (n2, c2), if n1 ≥ n2 and c1 ≤ c2, clearly the former is superior to the latter. However, if n1 ≥ n2, c1 ≥ c2 or n1 ≤ n2, c1 ≤ c2, the two plans are not immediately comparable. This will frequently be the case when comparing Double Zero plans with Military Standard 105D plans, namely the former will employ smaller samples with zero acceptance numbers, while the latter usually will employ larger samples with larger acceptance numbers.
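The OC-curve comparison is simple to mechanize. A minimal sketch (Python; the function name is ours) computes P_a for an (n, c) plan under the usual binomial model, which is enough to verify the stringency orderings just stated:

```python
from math import comb

def accept_prob(n, c, p):
    """OC-curve ordinate: probability that a lot with fraction defective p
    is accepted by the (n, c) plan, i.e. P(X <= c) with X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))
```

For instance, accept_prob(13, 0, 0.01) is about 0.88, matching the Table 2 entry for lot size 50 at AQL 1.00; at fixed c the value falls as n grows (larger sample is tighter), and at fixed n it rises with c (larger acceptance number is looser).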
Table 1 illustrates several 0050 and 0025 plans for various lot sizes, acceptance probabilities at various AQL's, and p such that P_a(p) = 0.05. For infinite lot size the 0050 plan calls for a sample of size [?] while the 0025 plan calls for a sample of size 87. Table 2 offers for these lot sizes several different Military Standard 105D plans according to a specific AQL, along with the acceptance probability at the AQL and again p such that P_a(p) = 0.05. As observed earlier, it is not possible for finite lot size to obtain a Double Zero sampling scheme for every percent of tolerance. For example, a lot size of more than [?] is necessary for the existence of a 0025 plan. Comparably, for Military Standard 105D procedures, smaller preferred AQL's are allowed as the lot size increases. Note that only if the AQL is sufficiently small will the Military Standard 105D plan specify a zero acceptance number and eventually a sample size at least as large as demanded by a Double Zero plan. Hence for many lot sizes, if a moderate AQL is used to determine the Military Standard 105D plan, the Double Zero plan will be superior.
520
A. E. GELFAND
Table 1. Several Double Zero Sampling Plans

0050 plans

  Lot size   Sample size   Acceptance number   P_a(1.0)   P_a(1.5)   P_a(2.5)     p
      50         20              0               0.78       0.68       0.53      10.8
     500         23              0               0.79       0.70       0.56      11.5
    2000         25              0               0.78       0.69       0.54      11.0

0025 plans

  Lot size   Sample size   Acceptance number   P_a(.10)   P_a(.15)   P_a(.25)     p
     150         67              0               0.85       0.79       0.70       3.8
     500         78              0               0.92       0.88       0.78       3.5
    2000         85              0               0.92       0.88       0.78       3.0
Table 2. Several Military Standard 105D Plans

  Lot size    AQL    Sample size   Acceptance number   P_a(AQL)     p
      50      0.65       20               0              0.89      13.9
              1.00       13               0              0.88      20.6
              1.50        8               0              0.88      31.2
              2.50        5               0              0.86      45.1
     150      0.10      125               0              0.89       2.4
              0.15       80               0              0.88       3.7
              0.25       50               0              0.89       5.8
     500      0.10      125               0              0.89       2.4
              0.15       80               0              0.88       3.7
              0.25       50               0              0.89       5.8
              1.00       50               1              0.90       9.1
              1.50       50               2              0.96      12.1
              2.50       50               3              0.97      14.8
    2000      0.10      125               0              0.89       2.4
              0.15       80               0              0.88       3.7
              0.25      200               1              0.91       2.4
              1.00      125               3              0.95       6.2
              1.50      125               5              0.98       8.4
              2.50      125               7              0.98      10.5
Table 3 describes several Narrow Limit gaging sampling plans. In part (a) of the table, for various p's (AQL's) the corresponding p_t's are obtained for t = 0.5, 1.0, and 1.7. These are the t values suggested in the Ott-Mundel article. As can be seen, the effect on the incoming percent defective due to gaging is rather drastic. In part (b) of the table we consider each of the Military Standard 105D plans in Table 2 and notice the effect on P_a at the AQL as the gage changes through t = 0 (no gaging), 0.5, 1.0, and 1.7. Again the effect is profound. Clearly, 1.7σ gages are absurdly tight, while gages less than 0.5σ are hardly worth constructing. Hence the basic fault with Narrow Limit gaging plans is apparent; even a modest amount of gaging (t = 0.5, 1.0) will result in a much too severe sampling plan. Moreover, the situation is seen to grow worse with increasing lot size. The only recourse is to increase c, which merely indicates a willingness to accept lots of poorer quality in the first place. At best, then, Narrow Limit gaging when applied to Military Standard 105D plans may be thought of as tightened inspection. As a result we devote the remainder of the article to comparisons between the latter plans and Double Zero plans.

In particular let us focus on the areas mentioned in the introductory section.
TABLE 3. Several Narrow Limit Gaging Sampling Plans

(a)

     p     p_t at 0.5σ   p_t at 1.0σ   p_t at 1.7σ
   0.10       0.48           1.8           8.2
   0.15       0.68           2.4          10.2
   0.25       1.00           3.5          13.4
   0.65       2.40           6.9          21.8
   1.00       3.40           9.3          26.8
   1.50       4.70          12.1          31.9
   2.50       7.20          16.9          39.7

(b)

  Lot size    AQL    Sampling   P_a(AQL)   P_a(AQL)   P_a(AQL)   P_a(AQL)
                       plan        0σ        0.5σ       1.0σ       1.7σ
      50      0.65    (20,0)      0.89       0.68       0.24
              1.00    (13,0)      0.88       0.56       0.26       0.02
              1.50    (8,0)       0.88       0.70       0.42       0.04
              2.50    (5,0)       0.86       0.65       0.27       0.09
     150      0.10    (125,0)     0.89       0.52       0.11
              0.15    (80,0)      0.88       0.63       0.16
              0.25    (50,0)      0.89       0.61       0.17
     500      0.10    (125,0)     0.89       0.52       0.11
              0.15    (80,0)      0.88       0.63       0.16
              0.25    (50,0)      0.89       0.61       0.17
              1.00    (50,1)      0.90       0.48       0.05
              1.50    (50,2)      0.96       0.59       0.05
              2.50    (50,3)      0.97       0.51       0.02
    2000      0.10    (125,0)     0.89       0.52       0.11
              0.15    (80,0)      0.88       0.63       0.16
              0.25    (200,1)     0.91       0.39       0.01
              1.00    (125,3)     0.95       0.38
              1.50    (125,5)     0.98       0.40
              2.50    (125,7)     0.98       0.31
(1) Regarding underlying assumptions for suitable application, Military Standard 105D plans are applicable to any items inspectable on a dichotomous basis. In the same sense, Double Zero sampling procedures are also widely applicable when considered as ordinary (n, c) plans. However, only under the further assumptions of normality of measurements and large lot size will they afford approximately the additionally indicated protection.

(2) Regarding acceptable quality of lots, Double Zero plans strive for zero defectives and hence a lot is unacceptable when it contains nonconforming items. The use of zero acceptance numbers insures rejection in the case of known nonconformances. By comparison, Military Standard 105D plans do specify an AQL, thereby implicitly allowing acceptance of lots containing known or expected defectives. But these remarks obscure the issue. For any (n, c) plan we can compute the probability of acceptance at a particular AQL, and it will be greater than zero. If one were primarily interested in protection to the level of zero defectives in the lot, presumably one would use as small an AQL as possible in selecting a Military Standard 105D plan (and likely obtain a zero acceptance number). Such a choice would essentially negate any advantages due to the Double Zero plans and in fact might yield a tighter plan (for example, an AQL of 0.10 in Table 2).

(3) Regarding consumer versus producer protection, the Double Zero plans are designed such that at a fixed percent of tolerance the OC curves are quite similar regardless of lot size. In this sense, independent of lot size the consumer is furnished with a consistent probability of rejection of lots with nonconformances. As long as the vendor produces all items to requirements, all lots will be accepted. On the other hand, Military Standard 105D procedures, again by specifying an AQL, furnish the vendor with the knowledge that lots with an acceptable rate of nonconformances will be consistently accepted. In selecting a Military Standard 105D plan at any fixed AQL, lot size has a considerable effect on the OC curve (as may be seen somewhat in Table 2 at AQL 1.0). Hence the producer can substantially affect the protection offered by such plans by determining convenient lot sizes. Therefore, proponents of Double Zero plans claim to furnish the consumer with consistent risk while Military Standard 105D plans furnish the producer with consistent risk. But this is an oversimplification. At a given lot size, any sampling plan which is selected by the consumer and then employed regularly will offer, in terms of its OC curve, a consistent probability of acceptance of lots with nonconformances. This protection cannot be abused to the producer's advantage.

(4) Regarding relative cost, the expense for sampling under Double Zero plans will usually be less than for Military Standard 105D plans, since with a zero acceptance number smaller sample sizes will be required to achieve satisfactory protection. In sampling situations where (i) the vast majority of lots are free of nonconformances, (ii) only a small percentage of lots are nonconforming within the usual range of AQL's, and (iii) the measured variables almost always follow a normal distribution, such sample sizes are more than adequate to detect nonconformance in the lot. These remarks support the selection of an appropriately tight (n, c) plan regardless of any notions of Double Zero sampling. The cost to the producer in terms of rejected lots is again determined solely by the selection of such an (n, c) plan.
In conclusion, the major advantage offered by Double Zero sampling plans is that, at a given lot size, they tend to be tighter than "usually" employed inspection schemes. However, selection of a Military Standard 105D plan with sufficiently small AQL will in all ways achieve comparable performance. Furthermore, if tightened inspection or the somewhat equivalent Narrow Limit gaging modification of Military Standard 105D plans is used, even greater stringency will be achieved.
REFERENCES
Brownlee, K. A., Statistical Theory and Methodology in Science and Engineering (John Wiley & Sons, New York, 1965), 2nd ed.
Burr, I. W., Engineering Statistics and Quality Control (McGraw-Hill, New York, 1953).
Dodge, H. F. and H. G. Romig, Sampling Inspection Tables (John Wiley & Sons, New York, 1959), 2nd ed.
Ellis, E. W., "Double Zero Attribute Sampling Plans," Annual Technical Conference Transactions, American Society for Quality Control (1966), pp. 340-347.
Evans, D. H., "A Statistical Tolerancing Formulation," J. of Quality Technology 2, 226-231 (1970).
Grant, E. L., Statistical Quality Control (McGraw-Hill, New York, 1972), 4th ed.
Johnson, N. L. and F. C. Leone, Statistics and Experimental Design: In Engineering and the Physical Sciences (John Wiley & Sons, New York, 1964).
Military Standard Sampling Procedures and Tables for Inspection by Attributes, MIL-STD-105D, U.S. Government Printing Office, Washington, D.C. (Apr. 1963).
Ott, E. R. and A. B. Mundel, "Narrow Limit Gaging," Industrial Quality Control, 21-28 (1954).
Shewhart, W. A., Economic Control of Quality of Manufactured Product (D. Van Nostrand Co., Princeton, N.J., 1931).
SUBOPTIMAL DECISION RULE FOR ATTACKING
TARGETS OF OPPORTUNITY
Takasi Kisi
Defense Academy
Yokosuka, Japan
ABSTRACT
A player having only a definite number of weapons is hunting targets. His total hunting
time is also limited. Targets of opportunity with various values arrive at random, and as
soon as a target arrives the player observes the target value and decides whether or not
to shoot it down. The issue is what the decision rule is which guarantees him a maximum
expected gain during the hunting time. Poisson arrival of the targets, uniform distribution
of the target value, and the shoot-look-shoot scheme are assumed. A decision rule is derived
which is not optimal but has a very simple form and gives almost as good value as the optimal
decision rule does.
INTRODUCTION

Recently D. V. Mastran and C. J. Thomas [7] investigated a decision problem for attacking targets of opportunity: A player having only a definite number of weapons is hunting targets. His total hunting time is also limited. Targets of opportunity with various values arrive at random, and as soon as a target arrives the player observes the target value and decides whether or not to shoot it down. The issue is what the decision rule is which guarantees him a maximum expected gain during the hunting time. Mastran and Thomas derived relationships of dynamic programming type to obtain the optimal decision rule, and solved several interesting example problems.

More or less similar problems were also dealt with by several authors [1-6, 8]. Decision problems of this type are formulated rather easily by using dynamic programming, but often it is very difficult to derive their closed-form solution. In some cases the numerical solution would be valuable, but the perspective upon the problem would be rather difficult to have with only numerical solutions. Closed-form solutions would be of much value for this purpose even though they were not exact solutions but only approximate ones.

In the following, a decision problem is dealt with assuming Poisson arrival of the targets, uniform distribution of the target value, and the shoot-look-shoot scheme of firing. A decision rule is derived which is not optimal but has a very simple form and gives almost as good value as the optimal decision rule does.
ASSUMPTIONS AND FORMULATION

The problem dealt with below will be based on several assumptions, which we state first.

(1) Targets arrive randomly one by one. The interarrival times between successive targets are assumed to be independently and identically distributed with a density function

(1)    a(t) = λe^(-λt) for t > 0;  a(t) = 0 otherwise.
(2) The value of a target is unknown to the player in advance. He knows only its probability distribution. The value is independent of both the values of the preceding targets and the interarrival times of the target. For simplicity a uniform distribution is assumed here:

(2)    g(v) = 1 for 0 ≤ v ≤ 1;  g(v) = 0 otherwise.

(3) When a target arrives, the player observes its value exactly, and decides whether or not to attack. The shoot-look-shoot scheme is adopted: If the target is worth hunting, he fires a missile with kill probability p, partial damage being ignored. If he secures a hit, another shot is, of course, unnecessary. If he fails, the player is forced to decide whether or not to fire a second time, and a similar procedure, mentioned above, follows. It is assumed that the repeated firings waste no time. Any recall of targets is assumed to be impossible: Once the player decides not to attack a target, any chance for attacking the same target is lost forever.
(4) The player starts his hunting of duration t with a definite number m of missiles. The score of the player is the total value of the targets he will hunt during t. In the following, such a decision rule that maximizes his expected score is named optimal, and this optimal decision rule is sought.

Let f(t, m) be the optimal value function, i.e., the expected score to the player when time t and m missiles are remaining and the optimal decision rule is employed throughout the time t. Let φ(t, m|v) be the similar optimal expected score, on condition that a target of value v has just arrived at t, and let Φ(t, m) be the optimal expected score, given that a target has arrived at t without any information on its value.

The target value distribution is assumed uniform, and therefore

(3)    Φ(t, m) = ∫_0^1 φ(t, m|v) dv,

and the relationship between f and Φ is obviously

(4)    f(t, m) = ∫_0^t λe^(-λu) Φ(t - u, m) du,

or

(5)    Φ(t, m) = (1/λ) ∂f(t, m)/∂t + f(t, m).
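The model of assumptions (1)-(4) is also easy to simulate, which gives an independent check on any proposed decision rule. The sketch below (Python; the function name and the fixed-threshold simplification are this edition's additions, not the paper's; the optimal rule lets the threshold vary with t and m) estimates the expected score under a constant attack threshold c:

```python
import random

def simulate_score(t, m, p, c, lam=1.0, runs=20000):
    """Monte Carlo estimate of the expected score when the player attacks
    (shoot-look-shoot, firing consumes no time) exactly those targets
    whose value is at least a fixed threshold c."""
    total = 0.0
    for _ in range(runs):
        clock, missiles = t, m
        while missiles > 0:
            clock -= random.expovariate(lam)   # Poisson arrivals, rate lam
            if clock <= 0:
                break                          # hunting time exhausted
            v = random.random()                # target value, uniform on [0, 1]
            if v < c:
                continue                       # not worth a missile
            while missiles > 0:                # fire until a hit or no missiles left
                missiles -= 1
                if random.random() < p:        # each shot kills with probability p
                    total += v
                    break
    return total / runs
```

As a sanity check, with p = 1, c = 0, and ample missiles, the expected score is simply λt times the mean target value 1/2.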
When a target arrives, the player observes the target and finds out its value. Then he makes a decision. The decision rule would take the form "attack the target if, and only if, its value v is not less than a certain value c," where c is a critical level of target value that will be determined later optimally.

First let us consider the case v ≥ c. The expected score to the player evaluated at the instant just before his decision is φ(t, m|v). When the decision is reached a missile is launched, and it hits the target with probability p. As mentioned above, the process runs so quickly that practically no time is wasted. The player gets the value v, and thereafter he is expected to gain the total score f(t, m-1), since time t and m-1 missiles are remaining yet. If the missile misses the target, with probability (1-p), the situation changes over to one similar to that just before the decision, but the number of missiles is diminished by one. The expected score evaluated at this time point is given by φ(t, m-1|v). On the other hand, the target should be neglected when v < c. In this case neither time nor the missile is wasted, and the expected score at the instant just after the decision is given by f(t, m). Thus,

(6)    φ(t, m|v) = p{v + f(t, m-1)} + (1-p)φ(t, m-1|v)  for v ≥ c;
       φ(t, m|v) = f(t, m)  for v < c.
The optimal level c is dependent on both t and m, c = c(t, m), and it is intuitively clear that the function is increasing in t and decreasing in m. In the following, this property will be utilized without proof.

To see how the player fires and stops, let us suppose the case v = c(t, m). The player dares to fire the first shot. Suppose he misses the target. The player is at the second decision point, where the number of missiles is diminished by one. As mentioned above, c(t, m-1) > c(t, m), and so if the value is large enough and v ≥ c(t, m-1), he should make the second shot. If c(t, m-1) > v ≥ c(t, m), however, he should stop.
According to our decision rule, the player having m missiles should attack a target at t if its value is at least c(t, m). But because of the continuity property of φ(t, m|v) in v, the same expected value will be scored when the alternative choice, not to attack this target, is adopted here and the optimal decision rule is employed during the remaining period of time: From (6) we obtain

f(t, m) = p{c(t, m) + f(t, m-1)} + (1-p)φ(t, m-1|c(t, m)),
φ(t, m-1|c(t, m)) = f(t, m-1),

which lead to the relationship satisfied by the optimal c(t, m),

(7)    c(t, m) = (1/p){f(t, m) - f(t, m-1)},

t > 0 and m = 1, 2, . . ., where f(t, 0) is defined as 0.
Before we solve the equations of the optimal value function, let us mention a word on the asymptotic form of c(t, m).

If the player has a lot of missiles and his hunting time is long enough, or more specifically, if λt ≫ 1, the number of targets which are expected to arrive during the time t is about λt. The number of missiles he possesses is m, with which the player can hunt only mp targets, because he fires 1/p rounds on the average to shoot down a target. The player is therefore able to hunt only a fraction mp/λt of the targets in his total time, and he should select targets with considerably higher value which belong to the fraction mp/λt from the highest in the target value distribution. Let the critical value be c. Then 1 - c = mp/λt, and so c(t, m) would have an asymptotic form

(8)    c(t, m) ~ 1 - mp/λt.
OPTIMAL VALUE FUNCTION
In order to determine the optimal decision rule {c(t, m)}, the equations satisfied by the optimal value function f(t, m) should be derived and solved.

From (3) and (6),

Φ(t, m) = ∫_0^c(t,m) φ(t, m|v) dv + ∫_c(t,m)^1 φ(t, m|v) dv
        = c(t, m) f(t, m) + (p/2){1 - c(t, m)^2} + p{1 - c(t, m)} f(t, m-1)
          + (1-p) ∫_c(t,m)^1 φ(t, m-1|v) dv

and

Φ(t, m-1) = ∫_0^c(t,m) φ(t, m-1|v) dv + ∫_c(t,m)^1 φ(t, m-1|v) dv
          = c(t, m) f(t, m-1) + ∫_c(t,m)^1 φ(t, m-1|v) dv

are obtained, and substituting these relations into (5) yields the following equations:

(9)    (1/λ) ∂/∂t {f(t, m) - (1-p) f(t, m-1)}
          = (p/2){1 - c(t, m)^2} - {1 - c(t, m)}{f(t, m) - f(t, m-1)},   m = 1, 2, . . ..

It is noted that the condition (7) for the optimal decision has not yet been employed, and (9) is therefore the equation of the expected score f(t, m) corresponding to an arbitrary decision rule {c(t, m)}.

The equations satisfied by the optimal value functions are, from (9) and (7),

(10)   (1/λ) ∂/∂t {f(t, m) - (1-p) f(t, m-1)} = (1/2p){f(t, m) - f(t, m-1) - p}^2,   m = 2, 3, . . .,

and

(11)   (1/λ) ∂f(t, 1)/∂t = (1/2p){f(t, 1) - p}^2

for m = 1. Initial conditions are

(12)   f(0, m) = 0,   m = 1, 2, . . ..

The solution of (11) with (12) is easily found to be

(13)   f(t, 1) = pλt/(λt + 2).

An analytical solution of (10) seems rather difficult. Approximate solutions of the following type are therefore sought:

(14)   f_a(t, m) ≈ p{m - f_m/(λt + λt_0)},   m = 1, 2, . . ..

When (14) is inserted into (10), we obtain a recurrence relation

(15)   f_m = f_{m-1} + 1 + √(2p f_{m-1} + 1),   m = 1, 2, . . ..

With

(16)   f_0 = 0,   t_0 = 2/λ,

the solution (14) is identical with (13) for m = 1. For m ≥ 2, however, (14) renders approximate solutions, initial conditions (12) being violated.
Approximate value functions (14) are compared with the exact ones in Figure 1 for some values of m, where the exact value functions are calculated numerically by use of (10) and (11). For smaller values of t the approximate formulae give a bit smaller value, but satisfactory agreement is observed for large t.
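The recurrence (15) and the approximate value function (14) are trivial to evaluate numerically; a sketch follows (Python; function names are this edition's). With f_0 = 0 the recurrence gives f_1 = 2 for every p, so f_a(t, 1) reduces to pλt/(λt + 2), i.e., exactly (13):

```python
import math

def f_sequence(m, p):
    """f_0, f_1, ..., f_m from recurrence (15):
    f_m = f_{m-1} + 1 + sqrt(2 p f_{m-1} + 1), with f_0 = 0 from (16)."""
    seq = [0.0]
    for _ in range(m):
        seq.append(seq[-1] + 1.0 + math.sqrt(2.0 * p * seq[-1] + 1.0))
    return seq

def f_approx(t, m, p, lam=1.0):
    """Approximate value function (14): f_a(t, m) = p*(m - f_m/(lam*t + 2))."""
    return p * (m - f_sequence(m, p)[m] / (lam * t + 2.0))
```

Note that f_approx(0.0, 2, 1.0) is negative, which exhibits the violation of the initial conditions (12) mentioned above for m ≥ 2.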
SUBOPTIMAL DECISION RULE

Equation (7) gives the true optimal decision level if f(t, m) is the optimal value function. When an approximately optimal f(t, m) is used in place of the true optimal, it will render a suboptimal decision rule {c'(t, m)}. Let us adopt the approximate formulae given by (14). Then

(17)   c'(t, m) = 1 - (f_m - f_{m-1})/(λt + 2)  for t ≥ t_m;   c'(t, m) = 0 otherwise,

where

(18)   t_m = (1/λ){√(2p f_{m-1} + 1) - 1},

m = 1, 2, . . .. Suboptimal {c'(t, m)} is compared with {c(t, m)} in Figure 2. The suboptimal decision level c'(t, 1) is identical with the optimal, but for m ≥ 2 the suboptimal decision levels are
FIGURE 1. Optimal value functions (p = 1)

FIGURE 2. Decision rules, optimal vs. suboptimal (p = 1)
always lower than the corresponding optimal levels, as is seen in the figure. The difference is sufficiently small for large t, but is by no means negligible for small t. Nevertheless, the suboptimal decision rule gives a very satisfactory expected score even for small t. The solid line curves in Figure 3 are the optimal value functions, and dots are the values when the suboptimal decision rule {c'(t, m)} is employed, i.e., f-values calculated from (9), (17) and (15). The difference is negligible. In Table 1 the relative loss in the
FIGURE 3. Value functions, optimal vs. suboptimal (p = 1)
expected score of using {c'(t, m)} instead of the true optimal {c(t, m)} is shown for the case p = 1. The loss is maximum around λt ≈ mp, but even in this case the value is only 1.23 percent. When t is large and λt > mp, the loss is surprisingly small.

For p smaller than unity the loss grows a little. But calculation shows it never exceeds 3.2 percent. Figure 4 tells the fact.
TABLE 1. Relative Loss in Using Suboptimal Decision Rule in Place of the Optimal, p = 1
{f_opt(t, m) - f_sub(t, m)}/f_opt(t, m) × 100

   m \ λt     1      2      3      4      5      6      7      8      9     10
   1        0.00   0.00   0.00   0.00   0.00   0.00   0.00   0.00   0.00   0.00
   2        0.27   0.71   0.42   0.25   0.16   0.11   0.07   0.05   0.04   0.03
   3        0.04   0.60   1.02   0.72   0.47   0.31   0.21   0.15   0.11   0.09
   4        0.00   0.19   0.76   1.16   0.91   0.63   0.44   0.32   0.23   0.17
   5        0.00   0.04   0.32   0.85   1.21   1.02   0.76   0.55   0.41   0.30
   6        0.00   0.01   0.10   0.42   0.90   1.23   1.09   0.84   0.64   0.48
   7        0.00   0.00   0.03   0.16   0.49   0.93   1.23   1.13   0.91   0.70
   8        0.00   0.00   0.01   0.05   0.22   0.54   0.94   1.21   1.14   0.95
FIGURE 4. Maximum relative loss as a function of p
Finally it is worthwhile to mention that, as was expected, (17) has the asymptotic form (8) for large t and m. If we define A_m = f_m - f_{m-1} - mp, then for t ≥ t_m,

(19)   c'(t, m) = 1 - (mp + A_m)/(λt + 2).

We have an extra term 2 (= λt_0) in the denominator, and an additional term A_m in the numerator. The latter is found always positive, and A_m/mp → 0 when mp → ∞, as seen in Figure 5.
FIGURE 5. f_m - f_{m-1} vs. mp (curves correspond to p = 1, 1/2, and 1/4 from the right)
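The closed-form rule (17)-(18) is cheap to evaluate; a sketch in the same vein as the earlier ones (Python; the function name is ours):

```python
import math

def suboptimal_level(t, m, p, lam=1.0):
    """Suboptimal threshold (17): c'(t, m) = 1 - (f_m - f_{m-1})/(lam*t + 2),
    clipped at 0 for t below the t_m of (18); f_m follows recurrence (15)."""
    f_prev = 0.0                                        # f_0 = 0
    for _ in range(m - 1):                              # advance to f_{m-1}
        f_prev += 1.0 + math.sqrt(2.0 * p * f_prev + 1.0)
    diff = 1.0 + math.sqrt(2.0 * p * f_prev + 1.0)      # f_m - f_{m-1}
    return max(0.0, 1.0 - diff / (lam * t + 2.0))
```

For m = 1 this gives 1 - 2/(λt + 2) = λt/(λt + 2), identical with the optimal level, as the text states; the thresholds decrease in m, and for large λt the rule tends to the asymptotic form (19).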
REFERENCES
DeGroot, M. H., Optimal Statistical Decisions (McGraw-Hill Inc., 1970).
Derman, C., G. J. Lieberman, and S. M. Ross, "A Sequential Stochastic Assignment Problem," Man. Sci. 18, 349-355 (1972).
onis, J. N. and S. M. Pollock, "Allocation of Resources to Randomly Occurring Opportunities," Nav. Res. Log. Quart. 14, 513-527 (1967).
Gilbert, J. P. and F. Mosteller, "Recognizing the Maximum of a Sequence," J. Am. Statist. Assoc. 61, 35-73 (1966).
Howard, R. A., "Dynamic Programming," Man. Sci. 12, 317-348 (1966).
Koyama, S., "Resource Allocation to Targets of Opportunity," (in Japanese) Master's Thesis, Defense Academy, Japan, Feb. 1972.
Mastran, D. V. and C. J. Thomas, "Decision Rule for Attacking Targets of Opportunity," Nav. Res. Log. Quart. 20, 661-672 (1973).
Sakaguchi, M., "A Sequential Assignment Problem for Randomly Arriving Jobs," Rep. Statist. Appl. Res. JUSE 19, 99-109 (1972).
OCCUPATIONAL STRUCTURE IN THE
MILITARY AND CIVILIAN SECTORS
OF THE ECONOMY*
Sheldon E. Haber
The George Washington University,
Washington, D.C.
ABSTRACT
This paper focuses on trends in the occupational structure of the military and civilian
sectors of the economy. Some implications of these trends for manpower policies for the
all-volunteer military establishment are examined.
INTRODUCTION

The need for a useful and manageable classification of occupations for military manpower management is generally recognized. This need has been heightened in recent years as military technology has advanced, increasing the competition between the military and civilian sectors for skilled manpower. The advent of an all-volunteer military establishment further crystalizes the need for occupational analysis wherein the two sectors are viewed as a whole and treated in an integrated manner.

Although there are a number of systems for grouping occupations in the civilian and military sectors, the criteria employed in each system are different. The most important system in the civilian sector is that developed by the Bureau of the Census. In the military sector, the Department of Defense (DOD) has established a classification scheme which ties together the systems utilized by the individual military services. The major objectives of this paper are, first, to bring together data from these two sources in order to compare the occupational structure of the civilian and military work force and, second, to examine the implications of this structure for manpower policies aimed at implementing the all-volunteer force concept.
While the military comprises only a small proportion of all workers, a substantial percentage of workers in the craftsmen and related occupations are in the armed forces. Additionally, these occupations are among the most rapidly growing ones in the civilian sector. These findings suggest that policies directed toward enlarging the supply of manpower to the military by increasing lateral transfer from the civilian to the military sector may be constrained by the technology of production in the military sector and the supply of skilled manpower in the civilian sector. More appropriate policies would focus on making the military more attractive as a source of craftsmen training and on changing the military production function in a manner which reduces the requirement for skilled labor.
TRENDS IN THE OCCUPATIONAL STRUCTURE OF THE CIVILIAN AND MILITARY WORK FORCE
One of the more significant changes in the structure of the economy over many decades has been the decline in the proportion of the work force performing manual tasks. Similarly, in the military sector, the proportion of personnel attached to ground combat and general duty military occupations has also declined. At the end of World War I, for example, 40 percent of the military work force was estimated to be in this category [13, p. 52]. By the end of World War II, the corresponding figure was approximately 24 percent. This may be compared with the percentage of the male workers classified as operatives and laborers (including those in farming), which stood at 33 percent in 1950. Changes in the occupational distribution of the civilian and military sectors since 1940 are shown in Tables 1 and 2. In each table, occupations are aggregated into major groupings based on the Census and DOD classification systems, respectively.*

*This paper was prepared under the Navy Manpower R&D Program of the Office of Naval Research under Contract Number -67-A-0214, Task 0016, Project NR 347-024.

As can be seen from Table 2, during the post-World War II period, the occupational distribution of the military sector has changed considerably. The ground combat and services occupations have diminished in importance while the electronics and mechanics and repairmen occupations have gained in importance. Of some interest, the proportion of enlisted personnel in ground combat positions did not rise during the Vietnam War, suggesting that this proportion may have fallen as a result of recent reductions in military strength.

Although it is clear that since World War II similar changes have occurred in the occupational structure of the civilian and military sectors, a number of difficulties in comparing the Census and DOD data should be mentioned. An obvious difficulty is that there is little overlap in the titles of the major occupation groupings. The absence of farmers and farm managers from Table 2, for example, is hardly surprising. But one notices that professional and managerial workers are also missing from this table. Some workers who are classified as professionals by the Census, such as photographers and accountants, are classified as "other technical" and "administrative and clerical workers," respectively.
Table 1. Occupational Distribution of Employed Males, 1940-1970

                                                     Percent distribution
Occupation group                                  1940     1950     1960     1970
Professional, technical, and kindred workers       6.1      7.3      9.9     13.5
Managers and administrators, except farm           9.6     10.7     11.0     10.6
Farmers and farm managers                         14.8     10.3      5.5      2.7
Clerical and kindred workers                       6.0      6.4      6.7      7.2
Sales workers                                      6.7      6.4      6.9      6.8
Craftsmen, foremen, and kindred workers           14.9     18.6     19.9     19.7
Operatives and kindred workers                    17.9     20.0     18.8     18.2
Laborers, except farm                              8.9      8.2      7.2      6.1
Farm laborers and farm foremen                     8.2      4.9      2.8      1.6
Service and private household workers              6.1      6.1      6.5      7.7
Occupation not reported                            0.7      1.1      4.6      5.9
    Total a                                      100.0    100.0    100.0    100.0

a Rounded to 100.0 percent.
Sources: Bureau of the Census, Census of Population, 1970, General Social and Economic Characteristics, U.S. Summary, PC(1)-C1, Washington, D.C., U.S. Government Printing Office, 1972, and Census of Population, 1950, Characteristics of the Population, U.S. Summary, Vol. II, Pt. 1, Washington, D.C., U.S. Government Printing Office, 1953.
*The data for the military in Table 2 refer to authorized enlisted personnel strength rather than the actual personnel in each occupational group. As indicated in [13, p. 193], differences between authorized and actual strength have been small.
MILITARY AND CIVILIAN OCCUPATIONAL STRUCTURE
TABLE 2. Occupational Distribution of Enlisted Positions, Selected Years 1945-1967

                                      Percent distribution
Occupation group                   1945     1953     1963     1967
Ground combat                      24.1     17.3     14.1     14.1
Electronics                         5.8      9.5     14.2     14.7
Other technical                     7.2      7.3      8.1      7.7
Administrative and clerical        15.3     20.6     19.9     18.4
Mechanics and repairmen            20.0     22.3     24.5     26.1
Craftsmen                           9.2      6.6      7.2      6.8
Services                           16.6     15.4     11.9     12.0
Miscellaneous                       1.9
    Total a                       100.0    100.0    100.0    100.0

a Rounded to 100.0 percent.
Sources: Harold Wool, The Military Specialist: Skilled Manpower for the Armed Forces, Baltimore, The Johns Hopkins Press, 1969, and Bureau of the Census, Statistical Abstract of the United States, Washington, D.C., U.S. Government Printing Office, 1970.
by the DOD. More importantly, Table 2 is restricted to enlisted positions and hence excludes officers.* Since approximately 15 percent of all military personnel are officers [6, p. 25], comparison of the occupational distribution of enlisted personnel and civilian workers omits a not insubstantial segment of the work force in the military sector.

Other groups missing from Table 2 are the sales, operative and laborer occupations. Although some military personnel are employed in sales occupations, e.g., commissary workers, the counterpart of the civilian sales worker is almost absent in the military. Less obvious is the fact that only a small number of detailed occupations in the military correspond to the operative and laborer occupations. Most operatives and many laborers are employed in manufacturing, an activity which is almost wholly restricted to the civilian sector.**
The foregoing suggests that in a number of occupation areas there is little direct competition between the civilian and military sectors. This is not to say that competition is lacking; rather, it is indirect in that individuals may choose between jobs and careers which are largely unique to the military, e.g., ground combat, or largely unique to the civilian sector, notably in laborer, operative and sales occupations. There are a large number of occupations, however, where competition between the civilian and military sectors is direct, i.e., where occupation skills overlap both sectors. This is true of the administrative and clerical as well as the service occupations, but is of particular importance for the craftsmen and related occupations† where shortfalls in the military sector are most likely to occur.‡
"he DOD classifies officers into eight occupational groups: General officers and executives, tactical operations officers,
lence officers, engineering and maintenance officers, scientists and professionals, medical officers, administrators, and
> procurement and allied officers [5, pp. 1-5]. The second group includes pilots and aircraft crews and is similar to tin
I combat component for the enlisted force. The next four groups are primarily professional workers. The remaining groups
Ise the managerial class.
However, approximately one-quarter of all operatives in 1970 were drivers of automobiles, buses and trucks, a skill which
me importance in the ground combat specialties.
I I the civilian sector, mechanics and repairmen are classified together with other craftsmen. For the discussion that follows,
l;ful to distinguish mechanics and repairmen from other craftsmen as is done in the DOD classification.
JThe measurement of manpower shortages and surpluses in military occupations is an important but difficult problem,
res to remark here that quantitative estimates of imbalances are sensitive to how specific skills are grouped into detailed
H lions.
TABLE 3. Annual Rate of Growth in Craftsmen and Related Occupations: Military Positions, 1953-1967, and Male Experienced Civilian Labor Force, 1950-1970

                                             Enlisted         Experienced Male         Annual Rate of Growth
                                             Positions a      Civilian Labor Force a   Enlisted     Exp. Civ. Male
Occupation group                            1953     1967      1950       1970         Positions    Labor Force
Craftsmen and related occupations, total     872    1,130      8,423     11,503           1.9           1.6
  Electronics technicians b                  216      349        107        307           3.5           5.4
  Mechanics and repairmen                    506      619      1,646      2,196           1.5           1.5
    Aircraft                                 172      236         73        144           2.3           3.5
    Automotive                                82      106        667        928           1.9           1.7
    Other c                                  252      277        906      1,124           0.7           1.1
  Craftsmen (excluding electronics techni-
    cians and mechanics and repairmen)       150      162      7,037      9,000           0.6           1.2
    Construction and utilities d, e           51       62      3,163      3,526           1.4           0.5
    Marine operating crafts f                 36       34         59         30          -0.4          -3.3
    Metalworking g                            24       23      1,008      1,103          -0.3           0.5
    Other h                                   39       43      2,807      4,341           0.7           2.2
All other occupations                      1,394    1,245     34,299     37,909          -0.8           0.5
    Total                                  2,266    2,375     42,722     49,412           0.3           0.7

a Number in thousands.
b Includes radio and television mechanics.
c Excludes air conditioning, heat and refrigeration mechanics, and radio and television mechanics.
d Includes stationary firemen, power station operators, construction managers and construction apprentices.
e Includes air conditioning and heat and refrigeration mechanics.
f Includes sailors, and deckhands and boatmen.
g Includes welders and flamecutters, and metalworking apprentices.
h Excludes occupations cited in footnotes d, f, and g.
Source: Bureau of the Census, Census of Population, 1970, Detailed Characteristics, U.S. Summary, PC(1)-D1, Washington, D.C., U.S. Government Printing Office, 1973, and Census of Population, 1950, Characteristics of the Population, U.S. Summary, Vol. II, Pt. 1, Washington, D.C., U.S. Government Printing Office, 1953.
In Table 3, the craftsmen and related occupations are separated into three categories: electronics technicians, mechanics and repairmen, and craftsmen (excluding mechanics and repairmen). Included among electronics technicians are radio operators, who in the civilian sector are classified by the Census as professional workers. Likewise, included among craftsmen (excluding mechanics and repairmen) are some workers, principally in the construction industry, who are classified as operatives and managers by the Census.*

A number of interesting inferences can be drawn from this table. The craftsmen and related occupations now comprise almost one-half of all enlisted positions (and approximately 40 percent of the entire military work force including officers). In the civilian sector, about one-fifth of the experienced male civilian work force† are in these occupations. Hence, although the experienced male civilian labor force is more than 20 times as large as the enlisted military work force, the number of civilian craftsmen is only 10 times as large as the number of craftsmen in the military. Additionally, more than

*Civilian occupations corresponding to military groupings shown in Table 3 are found in [13].
†The experienced labor force includes the employed and unemployed persons who have worked at any time in the past.
half of the craftsmen in the military are concentrated in mechanics and repairmen occupations. In these occupations, the ratio of civilian to military workers is only 3.5 to 1. Thus, for this important group of occupations, labor market conditions in the civilian economy will have important implications for manpower management in the military sector, and, conversely, manpower policies in the latter sector can have an important impact on the former sector. For example, in a draft environment, much of the training provided by the military will be lost when draftees return to civilian life and embark on careers of their own choice. In an all-volunteer military establishment where individuals are much more likely to receive the type of training they desire, there is a correspondingly greater likelihood that if they choose a career in the civilian sector, it will be in a field related to their military training.*

The data in Table 3 indicate that for some occupational groups the ratio of civilian male workers to military workers is high, e.g., in construction and utilities and the metal working trades. Where this is so, the possibilities for lateral transfer between the civilian and military sectors may be good. Where the ratio is low, however, the pool of manpower in the civilian sector available for work in the military is likely to be small and lateral transfer may be difficult to effect. It is of some interest, therefore, to note that for the electronics and aircraft mechanics and repairmen occupations, which comprise more than one-half of the positions in craftsmen and related skills in the military, the ratio of civilian to military workers is very low.
Lateral transfer may be difficult to effect for another reason, despite its apparent attractiveness to the military. The cost of training to the military can be quite high, as much as $15,000 in some specialties, but more typically $10,000-$12,000 (see [7]). If this amount is amortized over a 4-year period, $2,500-$3,000 per year could be offered as a premium to workers trained in the civilian sector who accept a temporary job assignment in the military. Lateral transfer is not without its costs to the individual, however. Besides the possibility of some unemployment when the military contract period ends, there is the possibility of foregone promotion opportunities, lost seniority and pension rights, and the psychic costs of changing jobs, particularly if a change in location is required. These costs can be large. For example, Gallaway [8, pp. 35-39] estimates that the cost of changing from a job in New England to one on the Pacific coast, most of which is unrelated to the expense of moving family possessions, ranges from $3,000 to $4,000 per year over a 5-year time horizon. The point to be emphasized here is that in evaluating lateral transfer as a means for increasing the supply of manpower to the military, costs to the individual as well as benefits to the military need to be taken into consideration.

In addition to the limited supply of civilian manpower in the craftsmen and related occupations relative to the needs of the military, the growth of these occupations in the civilian sector has been rapid. During the period since World War II, the rate of growth of civilian craftsmen and related occupations has been 1.6 percent per year, compared to 0.5 percent per year for the rest of the civilian labor force.† To illustrate the cumulative effect of this difference in growth rates, $100 compounded annually at a rate of 1.6 percent will grow to $200 in 44 years. Compounded at an annual rate of 0.5 percent, it would take 139 years to double. It should be noted, moreover, that the most rapidly growing

*This suggests that earlier studies of transference of skills [12] and assessments of the extent to which military training is specific or general training may not be relevant in the context of a no-draft environment. If transference of skills does occur, the social and private returns on investment in human capital, represented by training in the military in craft trades and other skill areas, could be large.
†Annual rates of growth are shown since the data in each sector cover different periods of time. The annual rate of growth is computed by solving for r in the relationship Y = A(1 + r)^t, where A and Y are the number of individuals in an occupation at the initial and terminal dates, respectively, and t is the number of years between the two dates.
craftsmen occupations in the civilian sector, electronics technicians and aircraft mechanics and repairmen, are also the occupations for which the demand for manpower has risen most rapidly in the military. Additionally, demand in these occupations is increasing faster in the civilian than in the military sector. Hence, it is not surprising that the military has experienced difficulty in meeting its needs for personnel in these technical fields. Granting that the drawdown from the peak strength of the Vietnam War will improve the short-run military manpower balance sheet, should these trends continue, they portend serious problems for the all-volunteer force which are now somewhat obscured by the emphasis on procurement of combat arms personnel.
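The compounding arithmetic quoted earlier (an annual rate r recovered from endpoint counts via Y = A(1 + r)^t, with 1.6 percent doubling in roughly 44 years and 0.5 percent in 139) can be verified in a few lines. A minimal sketch, using the civilian totals from Table 3 (in thousands):

```python
import math

def annual_growth_rate(initial, terminal, years):
    """Solve Y = A(1 + r)**t for r, given endpoint counts A, Y and t years."""
    return (terminal / initial) ** (1.0 / years) - 1.0

def doubling_time(rate):
    """Years for a quantity compounded annually at `rate` to double."""
    return math.log(2.0) / math.log(1.0 + rate)

# Civilian craftsmen and related occupations, 1950-1970 (thousands, Table 3):
r_craft = annual_growth_rate(8423, 11503, 20)
# Remainder of the male experienced civilian labor force, same period:
r_rest = annual_growth_rate(34299, 37909, 20)

print(f"growth rates: {r_craft:.3f} vs {r_rest:.3f}")      # about 1.6% vs 0.5%
print(f"doubling: {doubling_time(0.016):.0f} vs {doubling_time(0.005):.0f} years")
```

The same endpoint formula reproduces every entry in the growth-rate columns of Table 3.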
The trends noted raise another important issue for long-run manpower policy, namely, whether the military should attempt to maximize initial enlistments or reenlistments.* At least two arguments can be offered for the latter policy. First, the recent decline in the birth rate will make future recruitment of new entrants into the military sector more difficult (see [11]). Second, a military establishment comprised of a large proportion of career personnel would reduce the cost of training an all-volunteer force. But as indicated above, the demand for personnel with craftsmen skills by the military is large relative to the total demand for such services, and civilian craftsmen and related occupations have been growing more rapidly than other occupations,† particularly in the technical skill areas. Hence, the marginal cost to the military of attracting additional career personnel could be substantially higher than the average cost, particularly if the mechanism used to obtain personnel is across-the-board pay raises versus selective pay raises or bonuses.

Although much attention has been given to the wage elasticity of enlistment (see [1], [3], [10]), there is some indication that the opportunity to learn a skill or trade is at least as important an inducement to enlist as is military pay. For example, in the Gilbert Youth Survey [4, p. 68], skill training is cited almost as often as compensation as the incentive, if any, which would induce enlistment. When attention was restricted to youths who indicated they might enlist at some time, training was rated by twice as many respondents as compensation as an incentive which would exert a strong influence on the decision to enlist [4, p. 79]. This suggests the policy of utilizing the military as a vehicle for training a large number of young people for employment in areas where shortages are being experienced in the civilian sector (as in the craftsmen occupations), as an alternative to relying on reenlistments as the major source of military manpower. This policy could provide a means for offsetting the potential decline in manpower availability by increasing the propensity of those who are available to enlist for military service. It would also reduce the possibility of spiraling wage rates which could result from both sectors competing for the same body of workers to fill career positions. By providing initial training to inexperienced workers rather than attempting to maximize reenlistments, such competition would be reduced and retention costs lowered.

The policy of using the military to provide on-the-job training for young people as a means of satisfying the manpower requirements of a volunteer military establishment is similar in some respects to Project 100,000, except that the emphasis is directed not so much at providing training to individuals who
*In one sense, the two options are compatible in that the larger the number of initial enlistments, the larger the number of reenlistments, all other things remaining equal. In another sense, they are not. The rapid rise in the cost of military manpower, from 42 to 56 percent of the military budget between 1968 and 1974 [2], suggests that in the future it may be necessary to choose between making initial or career service more attractive, but not both. Recent discussions in Congress concerning the burgeoning costs of military pensions are one indication that this choice may not be far off.
†The same is also true for professional, technical and kindred workers. During the period 1950-70, the annual rate of increase of male experienced workers in this occupation group was 4.2 percent.
otherwise would have difficulty in competing in the labor market as to individuals who can compete but who would normally opt for training in the civilian sector. Some indication that individuals and certain institutions recognize the employment potential of craftsmen work is suggested by the increasing number of colleges which are offering vocation-oriented training.

The explicit policy of making craftsmen training in the military a more attractive option to young people would likely increase the defense budget. As an offset, however, this approach could ease the problem of maintaining a trained reserve to meet contingency situations, not to mention the substantial social and economic benefits that might be realized.
CONCLUSION

In this paper, the occupational structure of the civilian and military work force is examined, and some implications are drawn pertaining to military manpower policies in an all-volunteer environment. The occupation data reveal that over time the less skilled components of the military work force have been declining, paralleling changes in the composition of the civilian labor force. While this has been known for some time, it is less well recognized that employment in the craftsmen and related skills in the military comprises a substantial proportion of total employment in these skills in the economy at large. Moreover, male employment in the craftsmen and related occupations in the civilian sector has been growing at a substantially faster rate than total male employment. These trends suggest that the civilian and military sectors are competing for a limited supply of male workers, and hence raise important questions for manpower policy. Perhaps the most crucial issue is whether the military should meet its needs for craftsmen and related skills through increased retention or through training programs aimed at attracting new entrants. Both policies are likely to have a high price tag. The latter policy has the virtue of providing a pool of trained, skilled manpower to the civilian sector and the potential for meeting military reserve requirements. The occupational data also suggest that although the benefit to the military of lateral transfer from the civilian sector is high, the possibilities of lateral transfer may be limited, since in those occupations where manpower shortages are most likely to occur, the ratio of civilian to military employment is very low.

Given the constraint that occupational structure places on policies designed to increase the supply of military manpower, the question arises as to the options that are most likely to succeed in meeting the long-run manpower needs of an all-volunteer force. One obvious way to correct labor market imbalances is to increase compensation in occupations where a shortage of personnel exists. However, this may be costly if pay increases are directed toward increasing retention. An alternative approach is to focus on first-term enlistees and emphasize the noncompensation inducement of training. The effect of this approach is to increase the supply of skilled manpower to the military. Adjustments can be made also on the demand side. When the price of a factor input rises, or a shortage manifests itself because the price of the factor is sticky, as is the case in military labor markets where market forces operate imperfectly, output levels can be maintained most efficiently by substituting other factors for the given factor. Where the given factor is skilled labor, this generally means substituting capital for labor to reduce the requirement of labor at all skill levels. Capital substitution has been taking place in the military, increasing output, e.g., firepower, per unit of labor input. The occupational data reviewed in this paper suggest, however, that the capital that has been developed has reduced the requirement for less skilled personnel but at the same time has increased the requirement for more skilled personnel.
Thus, in the future, emphasis needs to be given to changes in technology which lead to the substitution of less skilled labor for more skilled labor and reduce the requirement for the latter in absolute terms.*

Notwithstanding the recent attention that has been focused on the difficulty of meeting ground combat personnel requirements in the Army, this study suggests that in the near future, the services may have even greater difficulty in meeting requirements for skilled personnel in craftsmen and related occupations.
Acknowledgment
Thanks are due to Tulay Demirler whose assistance in the research was most helpful.
*The development of automated ships is an example of a technological change which can lead to a reduced demand for skilled manpower. The Air Force, and perhaps the other services as well, also appears to be giving serious consideration to capital equipment which employs less skilled labor input, e.g., the use of unmanned, expendable drones instead of manned aircraft (see [9]).

REFERENCES

[1] Altman, Stuart H., "Earnings, Unemployment and the Supply of Enlisted Volunteers," Journal of Human Resources, IV-1, 1969, 38-59.
[2] Carter, Luther J. et al., "Fiscal 1974 Budget," Science, 179, 9 February 1973, 550-551.
[3] Cook, Alvin A., Jr., "The Supply of Air Force Volunteers," The RAND Corporation, Santa Monica, California, 1970.
[4] Department of Defense, Attitudes of Youth Toward Military Service: Results of National Surveys Conducted in May 1971, November 1971, and June 1972, Manpower and Reserve Affairs, Report No. MA 72-7, August 1972.
[5] Department of Defense, Officer Occupational Conversion Table, Office of the Assistant Secretary of Defense, Manpower and Reserve Affairs, DA PAM 611-11, March 1972.
[6] Department of Defense, Selected Manpower Statistics, OASD (Comptroller), Directorate for Information Operations, April 15, 1973.
[7] Department of the Navy, Annual Training Time and Cost for Navy Ratings and NEC's, Bureau of Naval Personnel, NavPers 18660, November 1962.
[8] Gallaway, Lowell E., Geographic Labor Mobility in the United States, 1957 to 1960 (Washington, D.C.: Government Printing Office, 1969).
[9] Gillette, Robert, "Military R and D: Hard Lessons of an Electronic War," Science, 182, 9 November 1973, 559-561.
[10] Gray, Burton C., "Supply of First Term Enlistees," in The President's Commission on an All-Volunteer Force, The Report of the Presidential Commission on an All-Volunteer Force, 2-1-40 (Washington, D.C.: Government Printing Office, November 1970).
[11] Stewart, Charles T., Jr., "Demographic Trends and Naval Manpower Policies," The Graduate School of Arts and Sciences, The George Washington University, Technical Report Serial TR-1122, 8 June 1973.
[12] Weinstein, Paul A., and Eugene L. Jurkowitz, "The Military as a Trainer: A Study of Problems in Measuring Crossover," Proceedings of the 19th Annual Winter Meeting, Industrial Relations Research Association, San Francisco, California, December 28-29, 1966.
[13] Wool, Harold, The Military Specialist: Skilled Manpower for the Armed Forces, Baltimore: The Johns Hopkins Press, 1969.
ARMOR CONFIGURATIONS VIA DYNAMIC PROGRAMMING
R. E. Shear, A. L. Arbuckle, V. B. Kucher
USA Ballistic Research Laboratories
Aberdeen Proving Ground, Maryland
ABSTRACT
The problem of selecting materials, and their thicknesses and order, for armor designed for the defeat of shaped charge threats has been formulated as a constrained optimization problem. The mathematical model provides an optimal order and thickness of each layer of material such that the resulting armor configuration will be of minimum mass per unit area, subject to constraints on total thickness and shaped charge jet tip exit velocity.
INTRODUCTION
In the design of vehicular armor, the designer is confronted with a bewildering array of constraints and possibilities for the armor configurations. Given a vehicle, the designer must consider the threats to defend against, materials to use, and how to construct or fabricate the armor. Constraints are imposed in terms of thickness, weight, and technology as well as location of vulnerable areas and components.

In the following, it will be assumed that the threat is due to a shaped charge and that the armor is fabricated out of layers of preselected material of various thicknesses. The primary damage mechanism of the shaped charge is the high velocity metallic jet which is formed upon collapse of the metallic conical liner in the charge. This jet can penetrate and, actually, perforate several inches of armor. The remaining jet, after perforation, may cause fuel and ammunition fires, incapacitate or kill crew members, or damage vulnerable components. In addition, spallation of the back surface of the armor usually enhances the damage capability of the shaped charge. The reader interested in the mechanism of formation, history, etc., of the shaped charge is directed to the paper of Birkhoff et al. [2] and the references cited therein.
The selection and order of materials to be used in an armor fabricated of N layers of material for defense against shaped charge threats will be formulated as an optimization problem. In particular, two dynamic programming problems have been formulated for the design of layered armor. Both models use simple penetration equations for calculating jet tip velocity; thus interface phenomena, jet particulation, etc., are not described. Such equations and the resulting models, however, provide a good approximation of the exit velocities of continuous jets. The models, as developed, provide an effective rationale for the preliminary design stages of such armor.
FORMULATION OF MINIMUM MASS ARMOR PROBLEM
In this section, the problem of selecting materials, their thicknesses, and their order such that the resulting armor configuration will be of minimum mass per unit area will be formulated. The minimum will be subject to constraints imposed upon the total armor thickness and on the jet tip exit velocity.

If T denotes the total thickness of the armor; t_i, the thickness of the ith layer; ρ_i, the density of the ith layer; Z, the standoff distance; V, the initial striking velocity of the jet; and V_e, the exit jet tip velocity, then we seek to minimize the total mass per unit area of the armor, i.e.,
(1)    Minimize  Σ_{i=1}^{N} ρ_i t_i

subject to the constraints

(2)    Σ_{i=1}^{N} t_i ≤ T,

(3)    V_e / V ≤ A,

and

(4)    ρ_i ∈ {ρ^(1), ρ^(2), . . ., ρ^(k)},

where ρ^(i) denotes the density of the ith material. The ρ^(i)'s, or the materials, are preselected, and the ρ_i's and t_i's are to be selected so as to minimize the total mass per unit area subject to the imposed conditions. The jet tip, impacting slab i with velocity V_i, is eroded during the penetration process and the residual jet exits, if it perforates slab i, with exit velocity V_{e_i}. The exit velocity is given by the empirical relation

       V_{e_i} = V_i g_i(Z_i, t_i, ρ_i).

For most metals the function g is of the form

(5)    g_i(Z_i, t_i, ρ_i) = (Z_i / (Z_i + t_i))^{γ_i},    γ_i = √(ρ_i / ρ),

where Z_i is the standoff distance to the surface of the ith layer, ρ_i is the density of the ith slab, and ρ is the density of the jet material. Note that, for layered armor, the exit velocity of the jet, after perforating the ith layer, becomes the striking velocity at the (i + 1)st layer. The constraint equation (3) can then be written as

(6)    V_e / V = Π_{i=1}^{N} g_i(Z_i, t_i, ρ_i) ≤ A.
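Equations (5) and (6) can be evaluated directly for a candidate configuration: the per-layer ratios multiply, and the standoff distance to each successive layer grows by the thickness already perforated. A minimal sketch; the layer thicknesses, densities, and jet density below are illustrative values, not data from the paper:

```python
import math

def exit_ratio(layers, Z, rho_jet):
    """Cumulative exit-to-striking velocity ratio V_e/V for a layered slab,
    following equations (5) and (6)."""
    ratio = 1.0
    for t, rho in layers:
        gamma = math.sqrt(rho / rho_jet)     # exponent in equation (5)
        ratio *= (Z / (Z + t)) ** gamma      # g_i(Z_i, t_i, rho_i)
        Z += t                               # standoff to the next layer
    return ratio

# Hypothetical two-layer slab (thickness in cm, density in g/cm^3) against a
# copper-like jet at 10 cm initial standoff -- illustrative only.
print(exit_ratio([(2.0, 7.8), (3.0, 2.7)], Z=10.0, rho_jet=8.9))
```

A configuration is feasible under (3) exactly when this product is at most A.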
The minimization problem described by equations (1), (2), (4), and (6) will be cast as a dynamic program by noting that the minimum will depend upon A, Z and T. These become the state variables of the dynamic programming formulation. The decisions are to select the t_i's and ρ_i's, and the optimal policy is that collection {ρ*_N, t*_N, ρ*_{N-1}, t*_{N-1}, . . ., ρ*_1, t*_1} which yields the minimum for given values of A, T and Z. The formulation as a dynamic program depends upon "The Principle of Optimality" as given by Bellman [1], viz.,

"An optimal policy has the property that whatever the initial state and initial decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision."
Let H_N(T, Z, A) = the minimum mass per unit area of an N-layered slab of armor of total thickness T with the property that the ratio of the exit velocity of the residual jet to the initial jet striking velocity is less than or equal to A for a shaped charge at a standoff distance Z.

Relating the problem formulation and the definition of H_N to the Principle of Optimality, we have the initial state (T, Z, A); the initial decisions, t_N and ρ_N, where layer N is taken to be the first slab and its surface is at the standoff distance Z; and the resultant state (T', Z', A') produced by the initial decision, defined by T' = T - t_N, Z' = Z + t_N, A' = A / g_N(Z, t_N, ρ_N). The remaining decisions, i.e., ρ_{N-1}, t_{N-1}, . . ., ρ_1, t_1, must constitute an optimal policy with respect to the state (T', Z', A'). That is, the remaining decisions constitute the minimizing solution for the problem

       Minimize  Σ_{i=1}^{N-1} ρ_i t_i

subject to

       Σ_{i=1}^{N-1} t_i ≤ T',    Π_{i=1}^{N-1} g_i(Z_i, t_i, ρ_i) ≤ A',

where the ρ_i's belong to the given set of material properties. The solution of the above problem is given by H_{N-1}(T', Z', A').
The recursive relationship for the dynamic programming formulation is readily obtained by noting that the contribution to the mass per unit area which results from an initial decision {ρ_N, t_N} is ρ_N t_N. According to the principle of optimality, the remaining decisions must constitute an optimal policy; therefore, the minimum mass per unit area with respect to the resultant state is H_{N-1}(T', Z', A'). Thus, making the best initial choice yields the recursive relationship:

(7)    H_N(T, Z, A) = min_{t_N} min_{ρ_N} { ρ_N t_N + H_{N-1}(T - t_N, Z + t_N, A / g_N(Z, t_N, ρ_N)) }.

Equation (7) is the recursive relationship of the dynamic program and is the computational basis for the solution. The state variables can assume any value of interest, thus resulting in the embedding of the original problem into a family of two-dimensional minimization problems.
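One way to realize recursion (7) numerically, not described in the paper, is a brute-force search over a discretized thickness grid with memoization. In the sketch below, the material densities, jet density, and grid step are illustrative assumptions:

```python
import math
from functools import lru_cache

RHO_JET = 8.9                    # jet density (illustrative, copper-like)
MATERIALS = (2.7, 7.8, 17.0)     # preselected densities rho^(1) < rho^(2) < rho^(3)
STEP = 0.5                       # thickness grid increment

def g(Z, t, rho):
    """Equation (5): residual-to-striking velocity ratio for one layer."""
    return (Z / (Z + t)) ** math.sqrt(rho / RHO_JET)

@lru_cache(maxsize=None)
def H(n, T, Z, A):
    """Equation (7): minimum mass per unit area of an n-layer slab within
    thickness budget T at standoff Z, requiring an exit ratio of at most A."""
    if n == 0:
        # No layers left: the accumulated ratio is 1, feasible only if A >= 1.
        return 0.0 if A >= 1.0 else math.inf
    best = math.inf
    for k in range(1, int(round(T / STEP)) + 1):   # choose t_n on the grid
        t = k * STEP
        for rho in MATERIALS:                      # choose rho_n
            # Resultant state: T' = T - t, Z' = Z + t, A' = A / g_n.
            rest = H(n - 1, round(T - t, 6), round(Z + t, 6), A / g(Z, t, rho))
            best = min(best, rho * t + rest)
    return best

print(H(2, 4.0, 10.0, 0.8))   # two layers, 4 cm budget, exit ratio at most 0.8
```

A production calculation would instead exploit the structure discussed in the next section (the relation T' + Z' = T + Z and the discontinuity sets) to prune most of these evaluations.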
A METHOD OF COMPUTATION
The dynamic program, given by (7), involves three state variables which, in general, adds to the
nlexity of the computation, or, as Bellman describes it, the "curse of dimensionality". In this prob-
r however, the curse is not too damaging in that the special structure of the problem allows con-
1 able reduction in the amount of calculation necessary to solve the problem. For example, if Z
j r are given, then one readily sees that throughout all stages of the calculation the functions need
548 R - E - SHEAR, A. L. ARBUCKLE AND V. B. KUCHER
only be computed for values of T' and Z' such that T' +Z' = T+Z. Additional time and computation
savings may also be achieved by noting the discontinuities of the functions H N . The functions H x a
piecewise constant and have points of discontinuity due to the discrete choices of the p's or the choii
of allowable materials. The work of Haymond (3) is important at this stage, for Haymond conside
the discontinuity sets of the return function which are of the same form as the Hn functions her
The constraint set of equations is different from that of Haymond; however, it is conjectured th
Haymond's results can be readily transformed to the problem at hand. In short, if the discontinui
set of Hn is denoted by D(H.\), the discontinuity set of g, by D(g), and if D(g)*D(H,\) is the s
of products of the elements of the discontinuity sets of Hy and gj, then we have D(H,\) C D(H S -
UD(g) U (D(g)*D(H^)).
Thus, by saving the discontinuities of H_{N-1} and g, candidates for points of possible discontinuities
of H_N can be predicted. This normally results in fewer function evaluations and a corresponding
reduction in computation which may be substantial if the discontinuities are widely separated and if
one uses an effective algorithm for computation. In this problem, the discontinuities may be extremely
close and numerous depending upon N, the number of layers; therefore, in the present calculation, a
coarse grid was used coupled with a modified bisection method to locate the points of discontinuity.
No comparisons, in terms of speed of computation, have been made between the two methods, but it
is clear that more function evaluations must be made with the latter approach.
Equation (7) is a recursive formula where the solution at stage N, or layer N, depends upon
stage N-1. The computation can proceed given the value of H_1(T, Z, A). For simplicity in exposition,
it will be assumed that the material set (4) consists of metals only; thus the g-functions given by (5)
can be ordered with respect to the densities. Since 0 ≤ Z/(Z + t_i) ≤ 1 for all Z and t_i, g_i decreases with
increasing density, i.e., if p_i > p_j, then g_i < g_j. The set of materials represented by (4) shall be assumed
to be ordered such that p_1 < p_2 < . . . < p_k. H_1 can then be defined as
(8)    H_1(T, Z, A) =    p_1 T    if g_1(T, Z) ≤ A
                         p_2 T    if g_2(T, Z) ≤ A < g_1(T, Z)
                         . . .
                         p_k T    if g_k(T, Z) ≤ A < g_{k-1}(T, Z).
For values of A < g_k(T, Z), H_1(T, Z, A) is defined to be infinite, indicating that for these values
of A, no material exists within the preselected materials represented by (4). The values given by (8)
are not, in general, applicable if materials other than metals are used; the main point is that H_1(T, Z, A)
can be calculated, thus permitting the stage-by-stage computation. Later in this report configurations
for spaced armor will be computed which will modify the calculation at alternate stages, 2, 4, 6, . . .,
where the decision will include the possibility of the even numbered layers being air only. The g-function
for air is slightly different from that of (5) but causes no computational difficulty.
DYNAMIC PROGRAMMING ARMOR CONFIGURATIONS    549

Given H_1(T, Z, A) for selected values of T, Z and A thus permits calculation of H_2(T, Z, A).
The calculation then proceeds sequentially with N. For example, if T = T_0, Z = Z_0, and A = A_0, and
it is desired that H_2(T_0, Z_0, A_0) be calculated, then we can use (7) and (8). In particular, suppose that
the material set consists of two materials (metals) of density p^(1) and p^(2), respectively. Furthermore,
assume that each layer thickness must be a multiple of Δt = T_0/3. Thus, t_2 can assume the value Δt or
2Δt since an armor of two layers is required. The initial decision is to choose the material density of
layer two, p^(1) or p^(2), and the thickness t_2 = Δt or 2Δt of layer two. For example, if we choose Δt
and p^(1), then the immediate return, i.e., the mass of the 2d layer, is p^(1)Δt, and, since our remaining
decision or policy must be optimal, we have, with this choice, the armor mass per unit area,

(9)    p^(1)Δt + H_1(T_0 - Δt, Z_0 + Δt, A_0/g_1(Δt, p^(1), Z_0)).
For each choice of an initial decision p_2, t_2 we obtain an expression or value similar to (9). By
minimizing over all such decisions, H_2(T_0, Z_0, A_0) will be obtained. Examination of this example
shows that, in order to obtain H_2(T_0, Z_0, A_0), values of H_1 must be known at the points

(T_0 - Δt, Z_0 + Δt, A_0/g_1(Δt, p^(1), Z_0))
(T_0 - Δt, Z_0 + Δt, A_0/g_2(Δt, p^(2), Z_0))
(T_0 - 2Δt, Z_0 + 2Δt, A_0/g_1(2Δt, p^(1), Z_0))
(T_0 - 2Δt, Z_0 + 2Δt, A_0/g_2(2Δt, p^(2), Z_0)).

H_2(T_0, Z_0, A_0) is then obtained by selecting the smallest of the return values given by expression
(9) evaluated at the above points. In similar fashion, H_3, . . ., H_N can be obtained.
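The stage-by-stage computation can be sketched as follows. The density set, the increment Δt, and in particular the placeholder function g below are illustrative assumptions — the paper's g comes from Equation (5), which is not reproduced in this excerpt — so the numerical values are not the paper's:

```python
import math
from functools import lru_cache

DENSITIES = (1.85, 2.70, 4.51, 7.87)   # illustrative: roughly Be, Al, Ti, Fe (g/cm^3)
DT = 1.5                               # minimum layer thickness, cm

def g(t, rho, z):
    """Placeholder per-layer exit/striking velocity ratio in (0, 1],
    decreasing in thickness t and density rho; NOT Equation (5)."""
    return math.exp(-0.02 * rho * t / (1.0 + 0.05 * z))

@lru_cache(maxsize=None)
def H(n, t_total, z, a):
    """Equations (7)-(8): minimum mass/area of an n-layer armor of total
    thickness t_total at standoff z achieving velocity ratio <= a."""
    if n == 1:
        # Equation (8): lightest single material whose ratio meets a.
        feasible = [rho for rho in DENSITIES if g(t_total, rho, z) <= a]
        return min(feasible) * t_total if feasible else math.inf
    best = math.inf
    steps = int(round(t_total / DT))
    for k in range(1, steps - n + 2):  # leave >= n-1 increments for later layers
        t_n = k * DT
        for rho in DENSITIES:
            rest = H(n - 1, t_total - t_n, z + t_n, a / g(t_n, rho, z))
            best = min(best, rho * t_n + rest)
    return best
```

Memoization here plays the role of the tabular stage-by-stage evaluation; a production code would instead exploit the T' + Z' = T + Z structure and the discontinuity bookkeeping described earlier.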
ARMOR CONFIGURATIONS OF MINIMUM MASS
"he dynamic programming formulation and solution of the minimum mass per unit area problem,
dons (1-4), provides information on the effect of total armor thickness, minimum layer thick-
standoff distance, number of layers, etc., upon the minimum mass per unit area of the armor. In
words, the dynamic program, Equations 7 and 8, includes, in the course of computation, a sen-
y analysis. In this section, some of these effects will be illustrated.
Now, Equation (7) is solved recursively, first for N = 1, then N = 2, etc.; therefore, we first look at
the variation of minimum mass per unit area as a function of the number of layers, N. This effect is
illustrated in figure 1 for an armor of total thickness T = 10.5 cm, standoff distance Z = 4.0 cm, and
velocity ratio V_e/V ≤ 0.4. In this calculation each layer thickness is an integral multiple of 1.5 cm, that
is, the minimum layer thickness equals 1.5 cm. The maximum number of layers, therefore, is seven,
if each layer is 1.5 cm thick. In addition, for this example the density set (4) has densities corresponding
to those of air, beryllium, aluminum, titanium and iron (steel). An additional constraint is imposed
upon the solution in that air is considered as a candidate at layer n if, and only if, n is even. For
all of the armor configurations given in figure 1, the ratio of velocities is less than or equal to 0.4, that is,
each armor "defeats" the same threat. The mass/area number denotes the armor of minimum mass
per unit area. For example, for N = 1, steel would also "defeat" the given threat but the mass/area
would be substantially higher than that of titanium. We note from figure 1 that the minimum mass is a
decreasing function of N.
FIGURE 1. Minimum mass/area vs. number of layers. (Armor thickness = 10.5 cm (4.2 in); standoff = 4.0 cm (1.6 in);
exit velocity/striking velocity ≤ 0.4; layer thickness a multiple of 1.5 cm (0.6 in); the minimum mass/area decreases
from 47.4 g/cm² for a single layer to 31.5 g/cm².)

The minimum mass per unit area is also dependent upon the preselection of materials and in figure
2 we illustrate this dependency. The first armor illustrated, again for the same conditions as of figure 1,
has a minimum of 42.4 g/cm² when the material density set consists only of densities of beryllium,
aluminum, titanium and steel. The second consists of steel, aluminum and air and we note the de-
crease in the value of the minimum. The third armor is for the same class of materials as of figure 1 and
we note again the decrease in the value of the minimum.
The minimum layer thickness also has an effect upon the values of the minimum. The minimum
layer thickness is the value of the increment thickness, a decision value, and results from the dis-
cretization of the thickness. Of course, the minimum layer thickness may also be a technological or
fabrication constraint as well. The effect of minimum layer thickness is illustrated in Table I for Δt = 0.5,
1.0, and 1.5 cm. In general, we see that the minimum is a nondecreasing function of Δt.
Table I. Effect of Minimum Layer Thickness, ΔT, Upon Minimum Mass/Area (g/cm²)
(T = 9 cm; Z = 4 cm; V_e/V ≤ 0.4)

    N \ Δt    0.5 cm    1.0 cm    1.5 cm
    1         70.20     70.20     70.20
    2         44.70     44.70     47.20
    3         40.35     41.70     43.20
    4         39.15     39.30     43.20
    5         35.25     36.60     39.15
FIGURE 2. Effect of component preselection upon minimum mass/area. (Number of layers = 5; thickness = 10.5 cm;
standoff = 4.0 cm; V_e/V ≤ 0.4; minimum mass/area: 42.4 g/cm² for the set {Fe, Ti, Al, Be}, 35.55 g/cm² for the
second preselection, and 32.9 g/cm² for {Fe, Ti, Al, Be, Air}.)
Finally we illustrate in Table II the variation in the minimum mass per unit area with total armor
thickness. Again we assume a given shaped charge threat at a standoff distance of 4.0 cm and require
that the ratio of residual jet exit velocity to the initial striking velocity be less than or equal to 0.4.
The number of layers varies from 1 to 5 and the minimum layer thickness in all cases is equal to 1.5 cm.
Table II. Minimum Mass/Area (g/cm²) as a Function of Armor Thickness, T

    N \ T    7.5 cm    9.0 cm    10.5 cm
    1        58.50     70.20     47.40
    2        48.63     47.25     43.65
    3        46.80     40.56     39.60
    4        46.80     39.66     38.30
    5        46.80     36.93     32.90
MINIMUM EXIT VELOCITY ARMOR
Another area of interest arises when one is confronted with the development of an armor con-
figuration subject to constraints imposed upon the total thickness and the mass per unit area. In this
case, one is usually interested in an inequality constraint upon the mass, i.e., the mass per unit area
must not exceed a given value. In addition, one is interested in minimizing the residual jet tip exit
velocity subject to the above constraints. That is
Minimize V_e/V

subject to

(10)    Σ_{j=1}^{N} p_j t_j ≤ M

and

        p_j ∈ {p^(1), . . ., p^(k)}

for a given standoff distance Z. The expression for V_e/V is Equation (6).
The above problem can be cast as a dynamic program by defining G_N(T, Z, M) as the minimum
value of the ratio of the jet exit velocity to the initial striking velocity of a jet of a shaped charge, at
standoff distance Z, which is directed against an N-layer armor of total thickness T and mass per unit
area less than or equal to M.
The decision variables again are p_N and t_N and the choice of p_{N-1}, t_{N-1}, . . . must constitute an
optimal policy. In this manner the following recursive relation is obtained:

(11)    G_N(T, Z, M) = Min_{t_N} Min_{p_N} {g_N(t_N, p_N, Z) G_{N-1}(T - t_N, Z + t_N, M - p_N t_N)}.
If we let the set (4) contain density values for metals only, we again note that if p_i > p_j, then g_i < g_j.
By ordering the densities in decreasing value, p^(1), p^(2), . . ., p^(k), G_1 can be defined as follows: let r
denote the smallest value of the superscripts 1, 2, . . ., k for which p^(r) satisfies p^(r) T ≤ M; then

(12)    G_1(T, Z, M) = g_r(Z, T, p^(r)).

If there does not exist any such r for a given M, G_1(T, Z, M) is defined to be infinite, signifying that there
does not exist any material, in (4), which satisfies the mass constraint.
The computation, utilizing Equations (11) and (12), can proceed as previously described. Again we
note that G_N(T, Z, M) is a discontinuous function of M; therefore, the work of Haymond is again
applicable. In particular, for a given value of M, there exists an armor configuration whose mass per
unit area is M_A ≤ M and which has a minimum value of V_e/V. The values of V_e/V and M_A will, in gen-
eral, be the same as for the corresponding solution of Equations (2)-(6); hence, the solution to the above
problem is not further described.
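The recursion (11)-(12) can be sketched in the same spirit as the minimum-mass computation; the density set and the placeholder function g below are illustrative assumptions, not the paper's Equation (5):

```python
import math
from functools import lru_cache

DENSITIES = (1.85, 2.70, 4.51, 7.87)   # illustrative density set, g/cm^3
DT = 1.5                               # minimum layer thickness, cm

def g(t, rho, z):
    # Placeholder per-layer velocity ratio; NOT the paper's Equation (5).
    return math.exp(-0.02 * rho * t / (1.0 + 0.05 * z))

@lru_cache(maxsize=None)
def G(n, t_total, z, m_budget):
    """Equations (11)-(12): minimum exit/striking velocity ratio of an
    n-layer armor of thickness t_total at standoff z whose mass per
    unit area does not exceed m_budget."""
    if n == 1:
        # Equation (12): densest single material fitting the mass budget.
        feasible = [rho for rho in DENSITIES if rho * t_total <= m_budget]
        return g(t_total, max(feasible), z) if feasible else math.inf
    best = math.inf
    steps = int(round(t_total / DT))
    for k in range(1, steps - n + 2):
        t_n = k * DT
        for rho in DENSITIES:
            if rho * t_n <= m_budget:
                best = min(best,
                           g(t_n, rho, z) * G(n - 1, t_total - t_n,
                                              z + t_n, m_budget - rho * t_n))
    return best
```

Note that the recursion multiplies the per-layer ratios while the mass budget is consumed additively, mirroring the product form of the return in (11) and the subtraction M − p_N t_N.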
CONCLUSION

In the preceding we have described two formulations of problems whose solutions are useful in the
development of armor configurations for use in defense against shaped charge threats. The rationale
developed is for continuous jets; however, the resulting configurations are directly applicable in the
development of such armor. The dynamic programming formulation provides useful information in
regard to the sensitivity of the solution to changes in thickness, number of layers, etc., during the
course of solution.
ACKNOWLEDGMENT

The authors thank Dr. Frank Grubbs for his valuable assistance and encouragement in developing
and presenting this approach to problems of engineering design.
REFERENCES

1. Bellman, R., Adaptive Control Processes (Princeton University Press, 1961).
2. Birkhoff, G., D. P. McDougal, E. M. Pugh and G. I. Taylor, "Explosives with Lined Cavities," J. Appl. Phys. 19, 563-582 (1948).
3. Haymond, R. E., "Discontinuities of Optimal Return in Dynamic Programming," J. Math. Anal. & Applic. 30, 639-644 (1970).
ON MULTIPOPULATION QUEUING SYSTEMS WITH
FIRST-COME FIRST-SERVED DISCIPLINE
K. Truemper
University of Texas at Dallas
ABSTRACT
Steady-state probabilities for a multipopulation single channel multiserver queuing
system with first-come first-served queue discipline are established. Special cases lead to
simplified formulas which sometimes can be evaluated using existing statistical and/or
queuing tables. Optimization aspects are considered involving control of service rates.
Several applications are presented.
INTRODUCTION
Real world queuing systems frequently receive input from several populations with differing
arrival patterns and/or service requirements. If queue discipline and service rules are such that the
number of possible states for the system is small, one can often derive closed form solutions, or at
least solve numerically, for state probabilities and distributions of interest. This is, for example, the case
when priority rules are imposed. On the other hand, some queue disciplines, among them the first-
come first-served (FCFS) rule, typically lead to a large number of different states, and only in special
cases, e.g. when all populations are infinite and arrivals are according to a Poisson process with rate
λ_i for population i as in [1], can the number of states be reduced by arguments relying on the particular
arrival and/or service pattern at hand.
Here a multipopulation queuing system of type M/M/s with a large number of different states is
introduced and solved for the steady-state probabilities, using an approach that seems to be promising
for other multipopulation queuing problems of the exponential type as well. Assumed are r different
populations. A customer of population i arrives according to a Poisson process with rate α_j λ_{i,t(i)} if the
total number of customers in the system, i.e., in the queue or being served, is j, and the number of
customers of population i in the system is t(i). Upon arrival, each customer enters a single channel
queue with FCFS discipline, or goes immediately into one of s servers if possible. Service times are
exponentially distributed, and the service rate μ_j of any occupied server depends only on the total num-
ber of customers in the system, hence is independent of the type of customer that is being served. It
will be seen below that the assumption of exponential interarrival and service times is essential for the
solution method employed. This assumption clearly restricts the applicability of the model, though
frequently insight into the characteristics of a system can still be gained from the results given here if
the actual distributions can be approximated by exponential ones.
Examples of the system described above are the parts issue department of an automobile repair
shop, and a long-distance calling system of a business with leased telephone lines. In the first (second)
case, populations are the car mechanics of the shop and all customers buying parts (employees of the
different departments of the company), and the servers are the parts issue clerks (telephone lines).
Applications for the model typically arise when services provided for customers of different
populations are identical or very similar in nature, and when the impact of system changes on quality of
service for a particular population is to be evaluated. For example, in the repair shop problem one might
want to know the effect on average waiting time for the car mechanics as the number of parts issue
clerks is varied. As will be seen below, the solution is of sufficient simplicity to allow closed-form
expressions and usage of existing statistical distribution and/or queuing tables in special cases. Further,
the results lend themselves to queuing system optimization, including considerations of opening and
closing additional servers.
Few results have been published about multipopulation queuing systems with FCFS discipline.
All such systems assume Poisson arrivals and exponential service times. Cohen [4] has developed
waiting time results for a single-server system with two infinite populations and different service rates
for each population. For the same system providing service to r infinite populations, Ancker and
Gafarian [1] have established steady-state probabilities and waiting-time results. The case of a single-
server system with one finite and one infinite population requiring identical service has been solved in
[16, 17]; the first reference includes numerical approaches and a computer program for transient
probabilities and system descriptors. Lastly, Kotiah and Slater [9] have developed expected queue
length and waiting time for the two-server system handling customers of two infinite populations with
different service requirements. The latter paper contains additional references of related queuing
systems.
The subsequent section 2 presents the steady-state equations for the queuing system introduced
above. Steady-state probabilities are then derived in section 3 using an approach which appears to be
promising for other multipopulation queuing systems of the exponential type. Section 4 specializes
the results of section 3 for particular queuing systems. Lastly, optimization problems of service rate
control are discussed in section 5.
2. STEADY-STATE EQUATIONS
The following notation will be used throughout:

r = number of populations

s = number of servers

K = [k(1), . . ., k(r)] = vector representing the number of customers in the queue; k(i) =
    number of customers of population i

L = [l(1), . . ., l(r)] = vector representing the number of customers in the servers, analogous
    to K

T = K + L

n = Σ_{i=1}^{r} k(i)

m = Σ_{i=1}^{r} l(i)

q = Σ_{i=1}^{r} t(i) = m + n

p^{ab...x}_{K,L} = steady-state probability that k(i) (l(i)) customers of population i are in the queue (being
    served), i = 1, 2, . . ., r, the particular queue order being indicated by ab . . . x; each
    letter of the ordered n-tuple ab . . . x represents an integer between 1 and r and denotes
    the type of customer in the corresponding queue position, where customers arrived in
    order x, . . ., b, a

p_{K,L} = Σ_{ab...x} p^{ab...x}_{K,L}

p_T = Σ_{K+L=T} p_{K,L}

α_q λ_{i,t(i)} = arrival rate for population i if the total number of customers in the system is q, of which t(i)
    are of population i; α_q λ_{i,t(i)} ≥ 0; 0 ≤ t(i) ≤ q; ∀ i

λ_T = Σ_{i=1}^{r} α_q λ_{i,t(i)}

μ_q = service rate of each occupied server if the total number of customers is q; μ_q > 0, ∀ q > 0.
"or brevity of formulation, the rate /io, defined to be zero, will be used. Further, K + i will denote
ector [&(1), . . ., k(i — 1), k(i) + 1, k(i+ 1), . . ., k(r)]; if K is the null vector, we will write i
id of + i. Analogous definitions hold for L + i.
To assure existence of a steady-state solution, we will require for the remainder of this section, as
well as in section 3, that α_j = 0, ∀ j > j_max, where j_max is nonnegative and finite. This restriction implies
that the maximum number of customers allowed in the system is bounded by j_max, which is almost
always the case in real world applications. The case of infinite populations is discussed in section 4.
With the above definitions the steady-state difference equations for the queuing system are

(1a)    (α_m λ_T + m μ_m) p_{0,L} = Σ_{i=1; l(i)≠0}^{r} α_{m-1} λ_{i,l(i)-1} p_{0,L-i}
            + Σ_{j=1}^{r} (l(j) + 1) μ_{m+1} p_{0,L+j};    m < s

(1b)    (α_s λ_T + s μ_s) p_{0,L} = Σ_{i=1; l(i)≠0}^{r} α_{s-1} λ_{i,l(i)-1} p_{0,L-i}
            + Σ_{i=1}^{r} Σ_{j=1}^{r} (l(j) + 1 - δ_ij) μ_{s+1} p^{i}_{i,L-i+j};    m = s

(1c)    (α_{n+s} λ_T + s μ_{n+s}) p^{ab...x}_{K,L} = α_{n+s-1} λ_{a,t(a)-1} p^{b...x}_{K-a,L}
            + Σ_{i=1; l(i)≠0}^{r} Σ_{j=1}^{r} (l(j) + 1 - δ_ij) μ_{n+s+1} p^{ab...xi}_{K+i,L-i+j};    m = s; n > 0

where δ_ij is the Kronecker delta. In accordance with definitions by other authors (see, e.g., Morse
[12, p. 68]), Equation (1c) is the queue equation while Equations (1a, b) are the boundary conditions.
3. STEADY-STATE PROBABILITIES
The task of finding a solution for (1a-c) consists of two parts: (a) finding a general solution for
the queue equations, and (b) specializing the general solution so that the boundary conditions are
satisfied. Locating a general solution for the queue equations of multipopulation queues is usually a
difficult matter since the underlying coefficient matrix typically does not exhibit the triangular or
almost triangular structure which so frequently occurs in single population queues. For example,
in the system considered by Ancker and Gafarian [1] and mentioned in the Introduction, restatement
of the equations resulted in a block-triangular coefficient matrix, each block involving r equations.
The approach taken here to obtain the general relationship between probabilities of the queue equa-
tions is quite different from the one in [1], but is an extension of the methods employed by Morse [12]
for single population queues. One first replaces each (sometimes complex) transition occurring in
the queue equation by one or more elementary transitions, and estimates a separate relation for each
such elementary transition. Using the queue equation, the elementary relations are then combined
into the desired general relationship, which is then specialized to satisfy the boundary conditions.
The main difficulty of this approach is the estimation of consistent elementary relations, where con-
sistency is defined as follows:
Generally the sequence of elementary transitions that move the system from one state S_1 to some
other state S_2 is not unique. Let P[S_i] be the probability that the system is in state S_i, i = 1, 2. Using
the elementary relations, one can compute P[S_2] starting with P[S_1] and following a particular
sequence of elementary transitions leading from state S_1 to state S_2. The elementary relations are then
said to be consistent if for any states S_1, S_2, P[S_2] computed from P[S_1] as outlined is the same re-
gardless of the sequence of elementary transitions that lead from S_1 to S_2. If the number of states is
finite, and if all states are intercommunicating, as is the case here, then consistency implies that for
any state S the probability P[S] can be uniquely computed from P[S_0] where S_0 is some arbitrarily
selected state, usually representing the empty system. Furthermore, the probabilities so computed
must by construction satisfy the steady-state equations, hence are indeed the unique solution to these
equations (see Cox and Miller [5, p. 185]).
For the problem at hand, we will need three elementary transitions concerning (1) reordering the
sequence of customers in the queue, (2) addition of a customer to the queue, and (3) exchanging a
customer between the head of the queue and a server position.
In view of the results for one finite and one infinite population [16, 17], we tentatively estimate that

(2a)    p^{ab...x}_{K,L} = p^{a'b'...x'}_{K,L};    m = s; n > 0

where a'b' . . . x' is an arbitrary permutation of ab . . . x. For addition of a customer to the queue,
the probabilities are to be related by

(2b)    p^{ab...x}_{K,L} = v_{a,K-a,L} p^{b...x}_{K-a,L}.

Lastly, exchanging a customer between the head of the queue and a server position is to be described by

(2c)    p^{ab...xi}_{K+i,L-i+j} = w_{i,j,K+j,L} p^{ab...xj}_{K+j,L}.
To determine the unknown functions of (2b, c) we first choose K and L in the queue equation (1c) so
that p^{ab...x}_{K,L} and p^{b...x}_{K-a,L} are positive while p_{K,L} is equal to zero for any K, L corresponding to n + s + 1
customers. Since α_j = 0, ∀ j > j_max, at least one such equation exists, or the queue equation can be
eliminated from consideration entirely. Under the restrictions imposed on K and L, we have
λ_T equal to zero, and relations (2a-c) inserted into (1c) yield

(3)    s μ_{n+s} v_{a,K-a,L} p^{b...x}_{K-a,L} = α_{n+s-1} λ_{a,t(a)-1} p^{b...x}_{K-a,L}

or

(4)    v_{a,K-a,L} = α_{n+s-1} λ_{a,t(a)-1}/(s μ_{n+s}).
In general, relations (2a-c) applied to (1c) result in

(5)    (α_{n+s} λ_T + s μ_{n+s}) v_{a,K-a,L} p^{b...x}_{K-a,L} = α_{n+s-1} λ_{a,t(a)-1} p^{b...x}_{K-a,L}
           + Σ_{i=1; l(i)≠0}^{r} Σ_{j=1}^{r} (l(j) + 1 - δ_ij) μ_{n+s+1} w_{i,j,K+j,L} v_{j,K,L} v_{a,K-a,L} p^{b...x}_{K-a,L}.
Let K and L in Equation (5) be chosen so that p^{ab...x}_{K,L} is positive, and furthermore so that the number of
customers is one less than the number of customers in (3). Then p^{b...x}_{K-a,L} is also positive, and v_{j,K,L} of
(5) is either equal to zero or given by (4) with subscripts properly changed. Now v_{j,K,L} is equal to zero
if α_{n+s} λ_{j,t(j)} is equal to zero, and (4) with changed subscripts applies in both cases, i.e.,

(6)    v_{j,K,L} = α_{n+s} λ_{j,t(j)}/(s μ_{n+s+1}).

Using the above conclusions, we derive from (5)

(7)    v_{a,K-a,L} = α_{n+s-1} λ_{a,t(a)-1} / [α_{n+s} λ_T + s μ_{n+s}
           - Σ_{i=1; l(i)≠0}^{r} Σ_{j=1}^{r} (l(j) + 1 - δ_ij) μ_{n+s+1} w_{i,j,K+j,L} α_{n+s} λ_{j,t(j)}/(s μ_{n+s+1})].
We note that v_{a,K-a,L} has the same form as (4) if the w_{i,j,K+j,L} of (7) are such that

(8)    λ_T - Σ_{i=1; l(i)≠0}^{r} Σ_{j=1}^{r} (l(j) + 1 - δ_ij) w_{i,j,K+j,L} λ_{j,t(j)}/s = 0.
This is easily achieved if

(9)    w_{i,j,K+j,L} = l(i)/[l(j) + 1 - δ_ij].

Proceeding inductively, we see that (4) and (9) used for (2b) and (2c), respectively, indeed satisfy all
queue equations. To ascertain that these relations are consistent is a straightforward matter, and will
be omitted. The next step is finding relations for equations (1a, b). The procedure is analogous to the one
for the queue equation, leading inductively to the final result

(10)    p^{ab...x}_{K,L} = {Π_{i=1; t(i)≠0}^{r} [Π_{m'=0}^{t(i)-1} λ_{i,m'}]} {Π_{j=1}^{q} [α_{j-1}/μ_j]} [1/(m! s^n)] p_{0,0};    T ≠ 0,

which upon substitution into (1a-c) confirms the correctness of the intermediate steps.
Applying (10) to a particular system, one can either develop formulas for system descriptors, or
at least evaluate such descriptors numerically, using

(11)    p_{K,L} = Σ_{ab...x} p^{ab...x}_{K,L} = p^{ab...x}_{K,L} · n!/[Π_{i=1}^{r} k(i)!]

and

(12)    p_{0,0} = [1 + Σ_{T≠0} p_T/p_{0,0}]^{-1}

where p_T is defined in section 2. For single-server systems, the above relations simplify to

(13)    p_T = q! {Π_{i=1; t(i)≠0}^{r} [Π_{m'=0}^{t(i)-1} λ_{i,m'}]/t(i)!} {Π_{j=1}^{q} [α_{j-1}/μ_j]} p_{0,0};    T ≠ 0.
When utilizing the well-known relationships concerning system descriptors, for example L = λW
(for a "last word," see Stidham [15]), one can compute the average arrival rate for population i, λ̄_i, by

(14)    λ̄_i = Σ_{K,L} l(i) μ_q p_{K,L} = Σ_{T} α_q λ_{i,t(i)} p_T

since average arrival and service rates for each population must be equal in statistical equilibrium.
Here, we will omit further details for the general case since the relevant formulas are easily available
from the literature (see, e.g., Morse [12]).
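For a single-server system, Equations (12) and (13) can be evaluated directly by enumerating the bounded state space. The function below is a sketch under the section's assumptions (α_j = 0 beyond j_max); the rate arrays passed to it are illustrative, not any particular application's data:

```python
from itertools import product
from math import factorial, prod

def steady_state(lam, alpha, mu, j_max):
    """Single-server (s = 1) steady-state probabilities from Equations
    (13) and (12).  lam[i][m] = lambda_{i,m}, the rate of population i
    when m of its customers are present; alpha[j-1] = alpha_{j-1}; and
    mu[j] = mu_j (mu[0] is unused).  States with more than j_max
    customers have probability zero."""
    r = len(lam)
    states = [t for t in product(range(j_max + 1), repeat=r)
              if 0 < sum(t) <= j_max]
    ratio = {}                      # p_T / p_{0,0} for each T != 0
    for t in states:
        q = sum(t)
        pops = prod(prod(lam[i][m] for m in range(t[i])) / factorial(t[i])
                    for i in range(r))
        ratio[t] = (factorial(q) * pops *
                    prod(alpha[j - 1] / mu[j] for j in range(1, q + 1)))
    p00 = 1.0 / (1.0 + sum(ratio.values()))      # Equation (12)
    p = {t: p00 * v for t, v in ratio.items()}
    p[(0,) * r] = p00
    return p
```

With r = 1, constant rates, and all α_j = 1, the formula collapses to the familiar truncated M/M/1 probabilities p_q = (λ/μ)^q p_0; the finite-population rates of (15) would instead be supplied as lam[0][m] = (h − m)λ_1.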
4. SPECIAL CASES
Several multipopulation queuing systems are special cases of the one described by (1a-c). We
will indicate a few such systems below.
1. Unbounded Queue Length
One can view such a system as the limit of a sequence of finite state space systems, and under
rather mild assumptions steady-state probabilities p_{K,L} are given by (11) if Σ p_{K,L}/p_{0,0} is convergent
(see Feller [7, p. 455] for details and further references; for an elementary discussion, see Cox and
Smith [6, p. 44]).
2. Balking
If α_j ≤ 1, ∀ j ≥ 0, the factor (1 - α_j) may be viewed as the probability that an arriving customer
of any population balks (i.e., refuses to join the system) if the total number of customers already in the
system is equal to j. In the general case the average rate of balking for population i is

Σ_{T} (1 - α_q) λ_{i,t(i)} p_T

where λ_{i,t(i)} is the arrival rate for that population when t(i) customers of type i are already in the system.
3. Finite and Infinite Populations
In general, computation of p_{0,0} of (12) involves a very large number of probabilities p_{K,L} defined
by (11); similarly, evaluation of system descriptors typically requires extensive computational efforts.
In special cases, however, computations can be drastically reduced by the use of tables for certain
statistical distributions and/or queuing systems. As an example, we will consider a queuing system with
one finite and one infinite population, noting that both applications cited in the Introduction may be
of this type. Arrival rates are

(15)    α_q λ_{1,t(1)} = (h - t(1)) λ_1;    0 ≤ t(1) ≤ h
                       = 0;    else

for the finite population, which is assumed to be of size h, and

(16)    α_q λ_{2,t(2)} = λ_2

for the infinite population. Service rates are μ_j = μ, ∀ j > 0, for any occupied server. Inserting these rates
into (10) and assuming λ_2 < sμ, we get for 0 ≤ x ≤ min{h, s}

(17)    P[x] = P[x finite population customers in the servers]
             = {[1 - δ_{x,s}] β_x γ_{s-x-1} + c ξ_x η_{s-x}} p_{0,0}

with

(18)    c = (s^s/s!)/[1 - λ_2/(sμ)].
When λ_2 ≥ sμ, P[x]/p_{0,0} becomes unbounded, and the system never attains statistical equilibrium.
In view of the results for infinite population queues cited above, λ_2 < sμ is therefore necessary and suffi-
cient for the existence of a steady-state solution, and P[x] is given by (17) in that case.

In (17), both β_x and ξ_x can be computed from the finite queuing tables by Peck and Hazelwood
[13] by formulas of Votaw and Peck [18], while γ_{s-x-1} and η_{s-x} are available from tables of the Poisson
distribution (Molina [11]) and the binomial distribution (for references, see Feller [7, p. 148]), respectively.
Further, the binomial distribution may be approximated by the Poisson distribution, and both may be
approximated by the normal distribution (Feller [7, pp. 154, 188, 245]). The factor s^s/s! of c can by
Stirling's formula be estimated as e^s/(2πs)^{1/2} (Feller [7, p. 52]).
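The quality of that Stirling estimate is easy to check numerically; the value s = 16 below is an arbitrary illustration:

```python
import math

s = 16
exact = s**s / math.factorial(s)                   # the factor s^s/s! appearing in c
approx = math.exp(s) / math.sqrt(2 * math.pi * s)  # Stirling estimate e^s/(2*pi*s)^(1/2)
rel_err = abs(approx - exact) / exact              # about 1/(12s), i.e. roughly 0.5% here
```

The estimate always overshoots slightly, since Stirling's series for s! begins √(2πs)(s/e)^s (1 + 1/(12s) + ...).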
From (17) the probability that the system is empty is

(19)    p_{0,0} = {Σ_{x=0}^{min(h,s)} [(1 - δ_{x,s}) β_x γ_{s-x-1} + c ξ_x η_{s-x}]}^{-1}.

For a single-server system, (19) reduces to

(20)    p_{0,0} = (1 - λ_2/μ) ξ_0

where ξ_0 is the probability that a single-server system for one finite population is empty when the arrival
rate is given by (15) and the service rate is equal to (μ - λ_2). p_{0,0} of Equation (20) is also obtained
when p_T of (13) is inserted into (12), and is readily computed using the finite queuing tables [13].
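Equation (20) is simple enough to evaluate without tables. The sketch below computes ξ_0 from the standard finite-source (machine-repair) M/M/1 empty-probability formula — a textbook result consistent with the finite queuing tables cited, not a formula printed in this excerpt — and the rates in the example are arbitrary illustrations:

```python
from math import factorial

def p_empty(h, lam1, lam2, mu):
    """Equation (20) for s = 1: probability the system is empty, given a
    finite population of size h with per-source rate lam1, an infinite
    population with rate lam2 < mu, and service rate mu.  xi0 is the
    empty probability of a finite-source M/M/1 queue whose arrival
    rates follow (15) and whose service rate is mu - lam2."""
    assert lam2 < mu
    rho = lam1 / (mu - lam2)
    # Standard machine-repair result: p_t proportional to h!/(h-t)! * rho^t.
    xi0 = 1.0 / sum(factorial(h) // factorial(h - t) * rho**t
                    for t in range(h + 1))
    return (1.0 - lam2 / mu) * xi0
```

For example, h = 2, λ_1 = 0.5, λ_2 = 1.0, μ = 3.0 gives ρ = 0.25, ξ_0 = 1/1.625, and p_{0,0} = (2/3)/1.625 ≈ 0.41.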
In an analogous fashion, formulas for system descriptors can be developed and related to existing
tables. We will omit details since equations (17) and (18) indicate the general approach. For single-
server systems, the relevant formulas are provided in [16, 17]. Lastly, it should be noted that the
above results are easily extended to a system with one finite and several infinite populations by defining
λ_2 in (17)-(20) to be the sum of the arrival rates for the infinite populations.
5. OPTIMIZATION ASPECTS
The definitions of arrival and service rates in section 2 admit formulation of a variety of optimization
problems involving state-dependent control of these rates. Computational difficulties lie in the fact
that in general the control-dependent steady-state probabilities lead to criterion functions which
cannot be optimized by reasonable computational efforts. However, special cases lend themselves to
solution. For example, if all populations are infinite, results for single population queuing systems
of type M/M/s (see, e.g., Meyer [10]) can often be adapted to the system at hand. When one finite and
several infinite populations provide input, the computational approach outlined in section 4 may be
employed.
In contrast to the arrival control problem for two classes of customers considered by Scott [14],
we will explore a particular optimization problem involving addition and removal of a server, which is
a special service rate control problem. In the system of section 2, let one server be added when queue
length is equal to n_1 ≥ 1 and one arrival occurs. As long as queue length is greater than n_1, the additional
server is active. However, when queue length is equal to n_1 and one server finishes service, that server
is closed. We will first develop the steady-state probabilities, then present an application.
For the modified system, equations (1a-c) hold when n < n_1. Then

    (α_{n+s} λ_T + s μ_{n+s}) p^{ab...x}_{K,L} = α_{n+s-1} λ_{a,t(a)-1} p^{b...x}_{K-a,L}
        + Σ_{j=1}^{r} (l(j) + 1) μ_{n+s+1} p^{ab...x}_{K,L+j};    m = s; n = n_1

    (α_{n+s+1} λ_T + (s+1) μ_{n+s+1}) p^{ab...x}_{K,L} = Σ_{i=1}^{r} α_{n+s} λ_{a,t(a)-1} p^{b...xi}_{K-a+i,L-i}
        + Σ_{i=1; l(i)≠0}^{r} Σ_{j=1}^{r} (l(j) + 1 - δ_ij) μ_{n+s+2} p^{ab...xi}_{K+i,L-i+j};    m = s + 1; n = n_1

    (α_{n+s+1} λ_T + (s+1) μ_{n+s+1}) p^{ab...x}_{K,L} = α_{n+s} λ_{a,t(a)-1} p^{b...x}_{K-a,L}
        + Σ_{i=1; l(i)≠0}^{r} Σ_{j=1}^{r} (l(j) + 1 - δ_ij) μ_{n+s+2} p^{ab...xi}_{K+i,L-i+j};    m = s + 1; n > n_1.

By applying the solution approach of section 3, we have

    p^{ab...x}_{K,L} = {Π_{i=1; t(i)≠0}^{r} [Π_{m'=0}^{t(i)-1} λ_{i,m'}]} {Π_{j=1}^{q} [α_{j-1}/μ_j]}
        × [1/(m! s^{min(n,n_1)} (s+1)^{max(n-n_1,0)})] p_{0,0};    T ≠ 0.

The remaining formulas concerning p_{K,L}, p_T, etc., are those of section 3.
The above system occurs in certain models of multiprocessor computer systems. Suppose two processors solve high priority problems generated by r ≥ 1 populations as well as low priority problems available in unlimited number (e.g., low priority processing time is sold to a computer network). Each processor is assigned to a separate block of fast memory, and has access to either class of problems. The system may be operated under any one of many possible priority rules. We will consider two rules of the preemptive-resume priority type. Under rule 1, high priority jobs are handled FCFS and displace any low priority job in either processor upon arrival. Depending on arrival and service rates for high priority customers, this rule may lead to frequent "swaps" (removal of a job from fast memory and subsequent reinsertion) of low priority jobs. Rule 2 has one processor dedicated to low priority jobs unless the number of high priority jobs in the system exceeds n₁ + 1, where n₁ ≥ 1. Selection of n₁ obviously affects the average rate of swaps as well as the expected waiting time for high priority jobs.
Bell [3] has considered a related FCFS single-server system with two infinite populations. Besides a cost of waiting for high priority customers, a penalty cost is incurred whenever the server elects to serve a high priority customer not at the head of the queue. Additional references concerning the single-population case are also provided. Queuing models by Avi-Itzhak and Heyman [2] for single-processor multiprogramming computer systems complement the model described above very well in certain multiprocessor computer system applications where processors do not interact. In such systems one can decompose scheduling decision problems into two parts to obtain approximate solutions. First, the priorities problem is examined under simplifying assumptions for processing rules; the above model, as well as many others (see Jaiswal [8]), is potentially applicable. In part two, the concepts developed by Avi-Itzhak and Heyman are then employed to obtain detailed results for each priority class, utilizing system descriptor results of the priorities model.
Returning to the application at hand, we consider two FCFS single channel queuing systems with input from the high priority job populations. We will suppose that interarrival and service times satisfy the assumptions of the system considered in section 2. Under rule 1, we have a two-server system, so the steady-state probabilities are given by (10) and (11), and will be denoted by p_{K,L}(0). Rule 2 leads to a single-server system with server addition (removal) when queue length is equal to n₁, the number of busy servers is one (two), and an arrival (service completion) occurs. Note that the server that is added is not necessarily the one that is removed. However, either processor can handle high or low priority jobs, so for our purposes it is irrelevant which type of job is assigned to a particular processor. Hence steady-state probabilities, denoted by p_{K,L}(n₁), are given by (21) and (11). Utilizing L = λW, expected steady-state waiting times for high priority jobs, W(0) and W(n₁) under rules 1 and 2, respectively, can be computed. In analogous notation, the steady-state rates for swaps are, by elementary arguments,
(22)  e(0) = μ [ Σ_{i=1}^{r} p_{0,i}(0) + 2 Σ_{L: m=2} p_{0,L}(0) ],

      e(n₁) = μ [ Σ_{i=1}^{r} p_{0,i}(n₁) + 2 Σ_{K: n=n₁; L: m=2} p_{K,L}(n₁) ],

where μ is the service rate for high priority jobs.
Applying relevant cost/profit relationships to the above descriptors, the optimal rule is easily selected. For particularly simple systems, closed form expressions for the descriptors can sometimes be obtained, and minimization of the total cost function with respect to n₁ ≥ 0 (where n₁ = 0 denotes rule 1) is straightforward.
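This minimization over n₁ can be sketched as follows. The descriptor forms below are crude stand-ins for W(n₁) and the swap rates of (22), and the cost coefficients and parameter values are hypothetical; only the overall scheme (evaluate descriptors for each n₁ ≥ 0, with n₁ = 0 playing the role of rule 1, and pick the cheapest) reflects the text:

```python
# Toy threshold selection. The birth-death approximation and the
# swap-rate proxy are our own simplifications, not the paper's formulas.

def stationary(lam, mu, s, n1, N=400):
    def service_rate(n):
        return mu * min(n, s + (1 if n > n1 else 0))
    p = [1.0]
    for n in range(N):
        p.append(p[-1] * lam / service_rate(n + 1))
    z = sum(p)
    return [x / z for x in p]

def descriptors(lam, mu, s, n1):
    p = stationary(lam, mu, s, n1)
    L = sum(n * pn for n, pn in enumerate(p))
    W = L / lam                      # Little's law, L = lam * W
    swap_rate = mu * p[n1 + 1]       # proxy: downcrossing rate at level n1
    return W, swap_rate

def best_threshold(lam, mu, s, c_wait, c_swap, n_max=20):
    best, best_cost = 0, float("inf")
    for n1 in range(n_max + 1):      # n1 = 0 plays the role of rule 1
        W, swap = descriptors(lam, mu, s, n1)
        cost = c_wait * lam * W + c_swap * swap
        if cost < best_cost:
            best, best_cost = n1, cost
    return best

n1_star = best_threshold(lam=1.2, mu=1.0, s=1, c_wait=1.0, c_swap=2.0)
```

With closed-form descriptors, the same one-dimensional scan over n₁ applies directly.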
SUMMARY

This paper establishes steady-state probabilities for a class of FCFS multipopulation queuing systems, assuming exponential interarrival and service times. The solution approach is an extension of methods employed by Morse [12] for single population queues, and appears promising for other multipopulation queues of the exponential type.

Future research on the topic of FCFS multipopulation queues seems desirable. In particular, results for systems with nonexponential interarrival and service time distributions would be valuable. Another point should also be mentioned. It may be very difficult to obtain exact formulas for certain FCFS systems, or the formulas may involve extensive computational efforts for evaluation. Simple and easy-to-apply approximation formulas would be of practical value, provided reasonable bounds can be established for the errors introduced by such approximations.
REFERENCES

[1] Ancker, C. J. and A. V. Gafarian, "Queuing with Multiple Poisson Inputs and Exponential Service Times," Operations Research 9, 321-327 (1961).
[2] Avi-Itzhak, B. and D. P. Heyman, "Approximate Queueing Models for Multiprogramming Computer Systems," Operations Research 21, 1212-1230 (1973).
[3] Bell, C. E., "Efficient Operation of Optional-Priority Queuing Systems," Operations Research 21, 777-786 (1973).
[4] Cohen, J. W., "Certain Delay Problems for a Full Availability Trunk Group Loaded by Two Traffic Sources," Communication News 16, 105-113 (1957).
[5] Cox, D. R. and H. D. Miller, The Theory of Stochastic Processes (Methuen and Company, London, 1965).
[6] Cox, D. R. and W. L. Smith, Queues (Chapman and Hall, London, 1961).
[7] Feller, W., An Introduction to Probability Theory and Its Applications, Volume I (Wiley and Sons, New York, 1968), 3rd Edition.
[8] Jaiswal, N. K., Priority Queues (Academic Press, New York, 1968).
[9] Kotiah, T. C. T. and N. B. Slater, "On Two-Server Poisson Queues with Two Types of Customers," Operations Research 21, 597-603 (1973).
[10] Meyer, K. H. F., Wartesysteme mit variabler Bearbeitungsrate (Springer Verlag, Berlin, 1971).
[11] Molina, E. C., Poisson's Exponential Binomial Limit (Van Nostrand, New York, 1942).
[12] Morse, P. M., Queues, Inventories and Maintenance (Wiley and Sons, New York, 1958).
[13] Peck, L. G. and R. N. Hazelwood, Finite Queuing Tables (Wiley and Sons, New York, 1958).
[14] Scott, M., "Queuing with Control on the Arrival of Certain Type of Customers," J. Canadian Operational Research Soc. 8, 75-86 (1970).
[15] Stidham, S., "A Last Word on L = λW," Operations Research 22, 417-421 (1974).
[16] Truemper, K., "Single-Channel Single-Server Queues with Input from Both a Finite and an Infinite Population," Master's Thesis, University of Iowa (1969).
[17] Truemper, K., "Queuing with Poisson Arrivals from Both a Finite and Infinite Population," AIIE Transactions 4, 223-227 (1972).
[18] Votaw, D. F. and L. G. Peck, "Remarks on Finite Queuing Tables," Operations Research 16, 1084-1086 (1968).
INFORMATION FOR CONTRIBUTORS

The NAVAL RESEARCH LOGISTICS QUARTERLY is devoted to the dissemination of scientific information in logistics and will publish research and expository papers, including those in certain areas of mathematics, statistics, and economics, relevant to the over-all effort to improve the efficiency and effectiveness of logistics operations.

Manuscripts and other items for publication should be sent to The Managing Editor, NAVAL RESEARCH LOGISTICS QUARTERLY, Office of Naval Research, Arlington, Va. 22217. Each manuscript which is considered to be suitable material for the QUARTERLY is sent to one or more referees.

Manuscripts submitted for publication should be typewritten, double-spaced, and the author should retain a copy. Refereeing may be expedited if an extra copy of the manuscript is submitted with the original.

A short abstract (not over 400 words) should accompany each manuscript. This will appear at the head of the published paper in the QUARTERLY.

There is no authorization for compensation to authors for papers which have been accepted for publication. Authors will receive 250 reprints of their published papers.

Readers are invited to submit to the Managing Editor items of general interest in the field of logistics, for possible publication in the NEWS AND MEMORANDA or NOTES sections of the QUARTERLY.
NAVAL RESEARCH
LOGISTICS
QUARTERLY
SEPTEMBER 1976
VOL. 23, NO. 3
NAVSO P-1278
CONTENTS
ARTICLES
A Survey of Maintenance Models: The Control and Surveillance of Deteriorating Systems — W. P. Pierskalla and J. A. Voelker
Markovian Deterioration with Uncertain Information — A More General Model — D. Rosenfield
An Algorithm for the Segregated Storage Problem — A. W. Neebe and M. R. Rao
Optimal Facility Location under Random Demand with General Cost Structure — V. Balachandran and S. Jain
Some Comparative & Design Aspects of Fixed Cycle Production Systems — A. L. Soyster and D. I. Toof
Determining Adjacent Vertices on Assignment Polytopes — P. G. McKeown
Improved Convergence Rate Results for a Class of Exponential Penalty Functions — F. H. Murphy
Estimation of Strategies in a Markov Game — J. A. Filar
Flowshop Sequencing Problem with Ordered Processing Time Matrices: A General Case — M. L. Smith, S. S. Panwalkar, and R. A. Dudek
Mean Square Invariant Forecasters for the Weibull Distribution — Tiago de Oliveira and S. B. Littauer
A Comparison of Several Attribute Sampling Plans — A. E. Gelfand
Suboptimal Decision Rule for Attacking Targets of Opportunity — T. Kisi
Occupational Structure in the Military and Civilian Sectors of the Economy — S. E. Haber and R. E. Shear
Armor Configurations via Dynamic Programming — A. L. Arbuckle and V. B. Kucher
On Multipopulation Queuing Systems with First-Come First-Served Discipline — K. Truemper
OFFICE OF NAVAL RESEARCH
Arlington, Va. 22217